

Monday, November 18
 

08:30 GMT

Registration (Workshops)
Monday November 18, 2019 08:30 - 09:30 GMT
Front registration desk Puddle Dock, London EC4V 3DB

09:30 GMT

Audio signal processing - A practical guide for coders (Level: Intermediate)
Limited Capacity filling up

This hands-on workshop will focus on building a deep, intuitive understanding of fundamental signal processing concepts with an emphasis on problems specific to audio. The session will be driven by coding examples of gradually increasing complexity rather than textbook definitions, axioms and theorems.

After finishing this workshop, participants will be able to build and analyse common audio signal processing systems and understand their key properties.

While maths cannot be entirely avoided when discussing this topic, its use will be kept to a minimum, and it will be demystified and explained in detail where necessary.

A C++/JUCE project with the coding examples, exercises and analysis tools used throughout the session will be shared with participants shortly before the workshop.
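For a taste of the kind of coding example the session builds up from, here is a minimal one-pole lowpass filter, one of the simplest audio signal processing systems. This is an illustrative sketch, not taken from the workshop materials:

```cpp
#include <cassert>

// One-pole lowpass filter: y[n] = a * x[n] + (1 - a) * y[n-1].
// A small 'a' gives heavy smoothing (low cutoff); a = 1 passes input through.
struct OnePole {
    float a;          // smoothing coefficient in (0, 1]
    float y = 0.0f;   // previous output (the filter's state)

    explicit OnePole(float coeff) : a(coeff) {}

    float process(float x) {
        y = a * x + (1.0f - a) * y;
        return y;
    }
};
```

Feeding a unit step into the filter demonstrates its key property: the output approaches the input exponentially, at a rate set by the coefficient.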

Speakers

Zsolt Garamvölgyi

Audio DSP Consultant
Zsolt Garamvölgyi is a freelance Audio DSP Consultant. In the last five years he has been working with ROLI on the company's cross-platform synthesiser engine, Equator. Before ROLI he was a DSP Engineer at Blackstar Amplification where he developed the algorithms powering the highly...


Monday November 18, 2019 09:30 - 12:30 GMT
Cripplegate Room Puddle Dock, London EC4V 3DB

09:30 GMT

Build your first audio plug-in with JUCE (Level: Beginner)
Limited Capacity seats available

This workshop will guide you through the process of creating your first audio plug-in using the JUCE framework.
Writing an audio plug-in can be a daunting task: there are a multitude of plug-in formats and DAWs, working with live audio requires knowledge of real-time programming, and sharing data between the audio and GUI threads is full of subtleties. In this workshop you will learn from the experts the best practices in creating plug-ins using JUCE, with a focus on thread safety and DAW compatibility.

This workshop will cover
  • An introduction to JUCE
  • Configuring a plug-in project
  • Adding parameters to your plug-in and accessing them safely
  • Creating a basic GUI
  • Methods to simplify your plug-in’s code
  • Debugging and testing your plug-in

During the workshop attendees will create a simple audio plug-in under the guidance of the JUCE developers.
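The "accessing parameters safely" point above boils down to making GUI-thread writes visible to the audio thread without locks or allocation. A minimal sketch of the idea follows; the names are illustrative, not the JUCE API (JUCE's AudioProcessorValueTreeState wraps a similar mechanism):

```cpp
#include <atomic>
#include <cassert>

// Lock-free parameter sharing between the GUI thread (writer)
// and the real-time audio thread (reader).
struct GainParameter {
    std::atomic<float> value { 1.0f };

    // Called from the GUI/message thread.
    void set(float newValue) { value.store(newValue, std::memory_order_relaxed); }

    // Called from the audio thread: no locks, no allocation, no blocking.
    float get() const { return value.load(std::memory_order_relaxed); }
};

// Read the parameter once per block, then apply it to the samples.
void processBlock(GainParameter& gain, float* samples, int numSamples) {
    const float g = gain.get();
    for (int i = 0; i < numSamples; ++i)
        samples[i] *= g;
}
```

Reading the atomic once per block (rather than per sample) keeps the audio callback cheap and gives every sample in the block a consistent parameter value.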


Speakers

Ed Davies

JUCE Software Engineer, JUCE

Tom Poole

Lead Software Developer, JUCE


Monday November 18, 2019 09:30 - 12:30 GMT
Dowgate Room

09:30 GMT

Turn a JUCE plugin into a hardware instrument with Elk Audio OS (All Levels)
Limited Capacity filling up

In this workshop we present strategies and tools with which existing audio plug-ins can be deployed very efficiently on high-performance embedded Linux devices using Elk Audio OS, which will be released as open source shortly after the event.


  • Each participant will create a complete hardware instrument, on which one or more JUCE synthesizer and/or FX plug-ins are running natively.
  • It will be possible to alter a few central parameters using physical controls. The other parameters will be accessible from a laptop, phone or tablet.
  • By the end of the workshop, participants will have learned and practised all the steps involved.

Elk will provide all the hardware needed: a Raspberry Pi board, one of our custom audio shields, and a board with the physical knob, button and LED controls. We will also provide all the necessary software – a package centred on the Elk Audio Operating System and a build server ready to build and deploy plugins on the boards.


Requirements for participants:
  • A laptop (macOS / Windows / Linux) with an SSH terminal.
  • A working installation of the latest JUCE and Projucer 5.4.5 to build the examples.
  • Basic Linux / BASH knowledge.
  • Optional, but a good plus: basic Python knowledge.
  • Headphones with 3.5mm jack connector.

Additional requirements for those who want to build their own plugins:

Participants interested in setting up the toolchain before the workshop can get in touch with the organizers by writing to developers@elk.audio.

Speakers

Gustav Andersson

Senior Software Engineer, Elk
Will code C++ and python for fun and profit. Developer, guitar player and electronic music producer with a deep fascination with everything that makes sounds in one form or another. Currently on my mind: modern C++ methods, DSP algos, vintage digital/analog hybrid synths.

Ilias Bergström

Senior Software Engineer, Elk
Computer Scientist, Researcher, Interaction Designer, Musician, with a love for all music but especially live performance. I've worked on developing several applications for live performance and use by experts, mainly using C++.

Stefano Zambon

CTO, Elk
Wearing several hats in a music tech startup building Elk Audio OS. Loves all aspects of music DSP from math-intense algorithms to low-level kernel hacking for squeezing latency and performance.


Monday November 18, 2019 09:30 - 12:30 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

11:00 GMT

Apple Audio Office Hours (Level: All)
Limited Capacity seats available

New to developing audio applications for Apple platforms? Need help getting your Audio Unit to work properly? Not sure how to integrate MIDI into your application? Bring your laptop, code, and questions to this workshop to get help from Apple audio experts. From low-latency, real-time APIs for audio I/O, to Audio Unit instruments and effects, to CoreMIDI and beyond, Apple platforms provide a rich set of APIs for creating anything from simple audio playback applications to sophisticated digital audio workstations. This won't be a typical workshop, but rather a drop-in session where attendees can get advice from Apple experts. Peter and Béla will be supported by other members of the Core Audio team.

Speakers

Peter Vasil

Software Engineer, Apple
Peter is an audio software developer with the Core Audio team at Apple, working on various APIs, such as AVAudioEngine, Audio Units and CoreMIDI. Before joining Apple, he was a key member of the Maschine development team at Native Instruments. He also worked at Ars Electronica Futurelab...

Béla Balázs

Software Engineer, Core Audio Team, Apple
Béla Balázs is a Software Engineer on the Core Audio team at Apple, working on a variety of system frameworks, including APIs for Audio Units, AudioQueue and AVAudioEngine. Before joining Apple, he worked at Native Instruments in Berlin, on products including Maschine, Replika XT...


Monday November 18, 2019 11:00 - 12:30 GMT
Lower River Room Puddle Dock, London EC4V 3DB

12:30 GMT

Lunch
Monday November 18, 2019 12:30 - 14:00 GMT
Upper River Room Puddle Dock, London EC4V 3DB

14:00 GMT

The Android Studio Sessions
Limited Capacity seats available

Join the Android Audio team for a 1:1 session to discuss your app. Bring us code, questions, issues and ideas - anything related to Android app development. We'll have Software Engineers, Developer Programs Engineers and Developer Advocates ready to help you take your app to the next level. 
Sign up for a 15 minute slot here

Speakers

Glenn Kasten

software engineer, Google
Glenn Kasten is a software engineer in the Android media team, with a focus on low-level audio and performance. His background includes real-time operating systems and embedded applications, and he enjoys playing piano.

Don Turner

Developer, Google
Don helps developers to achieve the best possible audio experiences on Android. He spends a lot of time tinkering with synthesizers, MIDI controllers and audio workstations. He also has an unwavering love for Drum and Bass music and used to DJ in some of the worst clubs in Nottingham...

Phil Burk

Staff Software Engineer, Google Inc
Music and audio software developer. Interested in compositional tools and techniques, synthesis, and real-time performance on Android. Worked on HMSL, JForth, 3DO, PortAudio, JSyn, WebDrum, ListenUp, Sony PS3, Syntona, ME3000, Android MIDI, AAudio, Oboe and MIDI 2.0.

Atneya Nair

Intern, Audio Development, Google
I am a second year student at Georgia Tech studying Computer Science and Mathematics. I have particular interest in mathematical approaches to audio generation, processing and transformation. I recently completed an internship at Google focusing on Audio development on the Android...


Monday November 18, 2019 14:00 - 17:00 GMT
Lower River Room Puddle Dock, London EC4V 3DB

14:00 GMT

Build a synth with SOUL (Level: Beginner)
Limited Capacity filling up

The SOUL language and audio platform was announced last year at ADC2018 and has since been available for people to play with on our soul.dev website. In this workshop, its creators Julian Storer and Cesare Ferrari will introduce you to the syntax and structure of the language and help you to build a synthesiser from first principles. At the end, there will be some hardware devices available on which participants will be able to run their creations.
This workshop will cover:
  • How to use the soul.dev website or a local SOUL installation to write and test your code
  • The philosophy and principles behind the SOUL architecture and syntax
  • How to create the building blocks of a simple synth from scratch
Requirements:
  • A Mac or Windows laptop
  • Some basic familiarity with audio coding of some kind
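As a flavour of the "building blocks" step, a synth voice typically starts from a phase-accumulator oscillator. Sketched here in C++ rather than SOUL (the same structure maps directly onto a SOUL processor loop); this is illustrative, not workshop material:

```cpp
#include <cmath>
#include <cassert>

// Phase-accumulator sine oscillator: the canonical first building block of a synth.
struct SineOsc {
    static constexpr double twoPi = 6.283185307179586;
    double phase = 0.0, phaseInc = 0.0;

    void setFrequency(double freqHz, double sampleRate) {
        phaseInc = twoPi * freqHz / sampleRate;   // radians advanced per sample
    }

    float next() {
        float s = (float) std::sin(phase);
        phase += phaseInc;
        if (phase >= twoPi)
            phase -= twoPi;   // wrap to keep precision over long runs
        return s;
    }
};
```

Swapping `std::sin` for a wavetable lookup or adding an envelope on the output are the natural next steps in building up a full voice.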

Speakers

Julian Storer

Programmer, ROLI
Creator of the widely-used C++ framework JUCE, and the DAW Tracktion. Currently working at ROLI where I helped launch our BLOCKS products, and where I work on JUCE and creating a new universal audio platform: SOUL.

Cesare Ferrari

Developer, ROLI


Monday November 18, 2019 14:00 - 17:00 GMT
Dowgate Room

14:00 GMT

Introduction to deep learning speech synthesis (Level: All)
Limited Capacity filling up

This workshop introduces the core concepts of modern deep learning based speech synthesis and allows users to learn-by-doing.
We will get hands-on experience with synthesising, training and analysing these research-derived models.
By the end, we will have a good understanding of how to make machines talk in a very human-like way.

  • 1st part: Introduction
  • 2nd part: Synthesise (Practical)
  • 3rd part: Understanding
  • 4th part: Training (Practical)
  • 5th part: Analysis/Evaluation (Practical)
  • 6th part: Where next? 
Session length: 1.5 hours

Speakers

John Flynn

CTO, Speak AI

Ines Nolasco

Machine learning researcher, Speak AI


Monday November 18, 2019 14:00 - 17:00 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

14:00 GMT

Writing applications with JUCE audio back-end and JavaScript front-end (React Native/Electron) (Level: Advanced)
Limited Capacity full

In this workshop we will focus on integrating a React Native front-end (running natively on mobile and in Electron on desktop), a very popular choice among front-end developers for web as well as mobile and desktop platforms, with a JUCE-based audio processing back-end. We will be writing code in C++, JavaScript (TypeScript), Objective-C and Java.

Speakers

Tom Duncalf

Senior Software Engineer, Independent
Software developer currently leading development of the LUMI app for ROLI, using React Native, JUCE and Unity.

Lukasz Kozakiewicz

Senior Software Engineer, ROLI
With a passion for music and technology, Lukasz has worked on products such as the AKAI MPC software and the JUCE framework, as well as ROLI mobile and desktop apps. Currently working on the next generation of ROLI software, and writing code in various languages including C++, Objective-C, Java...


Monday November 18, 2019 14:00 - 17:00 GMT
Cripplegate Room Puddle Dock, London EC4V 3DB

17:00 GMT

Registration (Conference attendees)
Monday November 18, 2019 17:00 - 18:00 GMT
Front registration desk Puddle Dock, London EC4V 3DB

18:00 GMT

Welcome Party & Social Mixer in partnership with Apple: Quiz & Pizza
Limited Capacity seats available

Monday November 18, 2019 18:00 - 21:30 GMT
Upper River Room Puddle Dock, London EC4V 3DB
 
Tuesday, November 19
 

08:00 GMT

Registration
Tuesday November 19, 2019 08:00 - 09:00 GMT
Front registration desk Puddle Dock, London EC4V 3DB

09:00 GMT

Welcome address
Speakers

Jean-Baptiste Thiebaut

ADC director, JUCE


Tuesday November 19, 2019 09:00 - 09:30 GMT
Auditorium Puddle Dock, London EC4V 3DB

09:30 GMT

Keynote: Coding with Your Cave-Brain
This year, the title of Jules' keynote is "Coding with Your Cave-Brain".

Speakers

Julian Storer

Programmer, ROLI
Creator of the widely-used C++ framework JUCE, and the DAW Tracktion. Currently working at ROLI where I helped launch our BLOCKS products, and where I work on JUCE and creating a new universal audio platform: SOUL.


Tuesday November 19, 2019 09:30 - 10:30 GMT
Auditorium Puddle Dock, London EC4V 3DB

10:30 GMT

Coffee break
Tuesday November 19, 2019 10:30 - 11:00 GMT
The Mermaid

10:30 GMT

Poster: GPU-accelerated physical model for real-time drumhead synthesis
Speakers

Harri Renney

Student, UWE
I am a PhD student studying at the University of the West of England (UWE). My research investigates the use of Graphics Processing Units (GPUs) for accelerating the processing of digital audio related tasks. Come speak to me if you're interested in these areas!


Tuesday November 19, 2019 10:30 - 11:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

10:30 GMT

Poster: iPhone as an audiometer and precision audio reproduction and measurement device
In this poster, we present a new audiometer device developed for iPhones in collaboration with leading hearing scientists at a US-based therapeutics organisation. Capitalising on recent developments in this field, we have delivered a solution that is capable of accurate measurements similar to those achieved in clinical trials using expensive audiometers operated by specialist medical professionals. In the poster we will discuss:

  • Recent developments in this field
  • The DSP and software development details of the application
  • The specific challenges around achieving accurate sound reproduction and measurement using a mobile device
  • Performance of the application and the results of the clinical trials
  • Limitations of the application and further considerations
  • The wider context - how could an accurate, mass-market audiometer application help to prevent hearing loss and what could it mean for the research and development of new treatments

Speakers

David Gibson

Managing Director, FutureSonic Ltd
Expert mobile software consultant, specialising in audio and music technology. FutureSonic is a mobile software studio dedicated to everything audio, music and mobile. We work directly with companies around the globe to develop software that offers users new ways to create, perform...


Tuesday November 19, 2019 10:30 - 11:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

11:00 GMT

Panel: Product design
Moderator

Chris Randall

Co-Owner, Audio Damage, Inc.
Chris is the co-founder and product designer for Audio Damage. A music industry lifer, Chris has spent over three decades in the music business, as a lighting and stage designer, signed performing artist, label owner, and user interface and product designer. As a musician, Chris has...

Speakers

Matt Jackson

Product Designer for Instruments and Effects, Ableton/Surreal Machines

Mino Kodama

Design Manager, Native Instruments
Mino is a multi-disciplinary designer with a background in industrial design. 8 years ago, she joined Native Instruments to establish the in-house industrial design team. She has developed and released numerous successful hardware products for all 3 brands: Maschine, Traktor and...

Nick Dika

Product Designer, Soundtoys
Nick is a product designer specializing in audio and music technology. At Soundtoys, he’s part of a dedicated team making creative, award-winning audio effects for music production and mixing. In the past, he was Product Design Director at iZotope, where he worked on mixing and...


Tuesday November 19, 2019 11:00 - 11:50 GMT
Auditorium Puddle Dock, London EC4V 3DB

11:00 GMT

11.00 - 11.25 Real-Time Applications with AVAudioEngine & 11.25 - 11.50 Audio Unit v3 Revisited
Limited Capacity seats available

11.00 - 11.25
Real-Time Applications with AVAudioEngine
AVAudioEngine provides a powerful, feature-rich API for achieving both simple and complex tasks and for simplifying real-time audio work. This talk will give an introduction to AVAudioEngine and demonstrate the various ways it can be used. It will also focus on the new real-time additions introduced in macOS Catalina and iOS 13, and show how to integrate MIDI-only plug-ins with other engine nodes. The talk will conclude with a discussion of best practices and workflows with AVAudioEngine.

11.25 - 11.50
Audio Unit v3 Revisited
Version three of the Audio Unit API was originally introduced in macOS 10.11 and iOS 9, in the form of Audio Unit Extensions. It provides your app with sophisticated audio manipulation and processing capabilities, as well as allowing you to package your instruments and effects in self-contained units that are compatible with a wide variety of host applications. During this talk, we will explore some of the aspects of working with the Audio Unit v3 API. Topics we will touch on include:
  • Updating existing hosts to support Audio Unit Extensions
  • Porting existing Audio Units to the new API
  • The new user preset API
  • MIDI processor Audio Units

Speakers

Béla Balázs

Software Engineer, Core Audio Team, Apple
Béla Balázs is a Software Engineer on the Core Audio team at Apple, working on a variety of system frameworks, including APIs for Audio Units, AudioQueue and AVAudioEngine. Before joining Apple, he worked at Native Instruments in Berlin, on products including Maschine, Replika XT...

Peter Vasil

Software Engineer, Apple
Peter is an audio software developer with the Core Audio team at Apple, working on various APIs, such as AVAudioEngine, Audio Units and CoreMIDI. Before joining Apple, he was a key member of the Maschine development team at Native Instruments. He also worked at Ars Electronica Futurelab...


Tuesday November 19, 2019 11:00 - 11:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB

11:00 GMT

A practical perspective on deep learning in audio software
Deep learning technology is already all around us, from ubiquitous image classification and facial recognition to, of course, Animoji. These techniques have already improved the state of the art in some areas of audio processing, including source separation and voice synthesis. How can we, as audio application developers, take advantage of these new opportunities?

The goal of this talk is to provide a practical perspective on the design and implementation of deep learning techniques for audio software. We'll start with a very brief and high-level overview about what deep learning is, and then survey some of the application areas created by the technology. We'll keep the discussion grounded on specific case studies rather than abstractions. 

Attendees should leave with a better understanding of what it would take, practically, to integrate modern deep learning techniques into their audio software products. 

Speakers

Russell McClellan

Principal Software Engineer, iZotope
Russell McClellan is Principal Software Engineer at iZotope, and has worked on several product lines, including RX, Insight, Neutron, Ozone, and VocalSynth. In his career, he's worked on DAWs, applications, plug-ins, audio interfaces, and web applications for musicians.


Tuesday November 19, 2019 11:00 - 11:50 GMT
Lower River Room Puddle Dock, London EC4V 3DB

11:00 GMT

Creating a modular control surface ecosystem
From the Seaboard GRAND to the latest release of LUMI Keys, ROLI has always pushed to create new forms of expression and interaction through a blend of innovative hardware and powerful software, but this hasn't always been easy. This talk will go into the history of the relationship between ROLI’s hardware and software products, as well as the challenges these connections have created.

From a product perspective, we’ll then go into the details of some of the most recent problems we have tried to solve, and how we have redesigned the ecosystem to fit into a more open system of music making on desktop with ROLI Studio. We’ll talk about the development process and some technical details, and also provide a demo from team members on the project.

Attendees should leave with a better understanding of the problems ROLI are trying to solve, a good background in our development process, and maybe even an interest in helping grow this modular control surface ecosystem further!


Speakers

Elliot Greenhill

Sr Product Owner, ROLI Ltd
Elliot looks after the strategy and delivery of ROLI’s desktop software and has a passion for providing the best tools and experiences for music makers. Over the years Elliot has overseen the release of many of ROLI’s software products and the close relationship they have with...


Tuesday November 19, 2019 11:00 - 11:50 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

12:00 GMT

From jailbreaking iPhones to Grammy awarded albums: How mobile apps are shaping the future of music production
The iOS audio and music app landscape has evolved tremendously over the course of 10 years. From the days when creating apps meant “jailbreaking” iPhones and installing heavily modified compilers and development tools, to Apple’s current drive to unify iOS and macOS, it is clear that the impact on audio apps will be massive.
Today we can browse thousands of music apps on the App Store, with the major names of the audio industry showing a growing interest in the mobile app landscape.
Back in 2008, BeatMaker was one of the first music apps available on iOS, right during the App Store launch. After three iterations, it was shaped around a moving industry, which started attracting different user profiles, from plain newcomers to renowned music producers. It took dedication and time to prove that tablets and smartphones were viable platforms to sketch musical ideas or compose full albums.
The talk will share insights on the challenges and key moments, offer tips, and show how music apps have impacted the music production world.
We will go through the evolution of the industry following this timeline:
  • The early days (2007-2008)
  • The rise of the App Store and Music Apps (2008 - 2019)
  • The mobile music production movement today
  • Future of music production
The talk will start with a clear comparison of the industry now and then, and how it evolved. Technical challenges will be exposed, along with how they were overcome while keeping innovation the main drive. We will then follow up on how Apple and, more broadly, the industry started recognizing iOS as a viable platform for music production. Finally, we will review the challenges apps face nowadays, especially in an overcrowded marketplace like the App Store, and how we should prepare to adapt for the future.

Speakers

Mathieu Garcia

CEO, Mathieu Garcia
Mobile Apps Expert & Entrepreneur. I started writing apps months before the App Store officially launched. At the time, it was a simple hobby, a side project, in uncharted territory. Today we can browse thousands of music apps on the App Store, with the major names...


Tuesday November 19, 2019 12:00 - 12:25 GMT
Auditorium Puddle Dock, London EC4V 3DB

12:00 GMT

Introducing your robot lead singer (Demo 12.30 - 14.00)
In the last two years speech/singing synthesis technology has changed beyond recognition. Being able to create seamless copies of voices is a reality, and the manipulation of voice quality using synthesis techniques can now produce dynamic audio content that is impossible to differentiate from natural spoken output. More specifically, this technology allows us to create artificial singing that is better than many human singers, graft expressive techniques from one singer to another, and, using analysis-by-synthesis, categorise and evaluate singing far beyond simple pitch estimation. In this talk we approach this expanding field in a modern context, give some examples, delve into the multi-faceted nature of singing user interfaces and the obstacles still to overcome, illustrate a novel avenue of singing modification, and discuss the future trajectory of this powerful technology from Text-to-Speech to music and audio engineering platforms.

Many people now enjoy producing original music by generating vocal tracks with software such as VOCALOID, UTAU and Realivox. However, limited quality has meant that such systems rarely play a part in professional music production. Recent advances in speech synthesis technology will render this an issue of the past by offering extremely high-quality audio output indistinguishable from the natural voice. Yet how we interface with and use such technology as a tool to support the artistic musical process is still in its infancy.
Users of these packages are presented with a classic piano-roll interface to control the voice contained within, an environment inspired by MIDI-based synthesisers dating back to the early 1980s. Other singing synthesisers accept text input, musical score or MIDI, comprise a suite of DSP routines with audio as input, and/or opt for the manipulation of a pre-recorded sample library. In light of all these options, however, currently available commercial singing synthesis generally struggles to offer the level of control over musical singing expression or style typically exploited by most professional vocalists and composers.

The recent unveiling of CereVoice as a singing synthesiser demonstrates the ability to generate singing from spoken modelled data, producing anything from a comically crooning Donald Trump to a robot-human duet on "The Tonight Show Starring Jimmy Fallon". CereVoice's heritage as a mature Text-to-Speech technology with emotional and characterful control over its parametric speech synthesis engine offers novel insight into the ideal input that balances control and quality for its users. We exploit our unique position at the crossroads of speech technology, music information retrieval, and audio DSP to help illustrate our journey from speech to singing.
Voice technology is changing at breakneck speed. How we apply and interface with this technology in the musical domain is at a cusp. It is the ADC community that will, in the end, dictate how these new techniques are incorporated into the music technology of the future.

There will be a demo during the lunch break 12.30 - 14.00 (Tuesday 19 Nov) 

Speakers

Christopher Buchanan

Audio Development Engineer, CereProc
Chris Buchanan is Audio Development Engineer for CereProc. He graduated with Distinction in the Acoustics & Music Technology MSc degree at the University of Edinburgh in 2016, after 3 years as a signal processing geophysicist with French seismic imaging company CGG. He also holds...


Tuesday November 19, 2019 12:00 - 12:25 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

12:00 GMT

Life after the mixer - Software in live sound system applications
At previous ADCs, significant focus was given to software used in music production. But what happens to our precious audio signal once it leaves the mixing console during a live performance? Is there any cool or interesting software used in live sound system applications of which neither the artist nor the audience may be aware? I'm here to tell you that there is! There are complex and compelling software algorithms involved in the planning and design of live entertainment events. Even during live performances, software and signal processing play a vital role in bringing the music to every seat in the audience, bringing a smile to every face.

Starting from the planning and simulation phase of a large sound system, I'll give you a glimpse of the software tools which are used to design some of the biggest concerts out there. Some of these tools facilitate performance prediction, alignment, rigging, and also help ensure safety concerns are met before a single loudspeaker is hung. Using sophisticated mathematical models and high-resolution dispersion data, the wavefront of each loudspeaker within a line array can be synthesized. Precise simulations of the level distribution can then be calculated and viewed in a three-dimensional representation of the venue, which can significantly reduce setup and tuning time in touring applications.
 
The choice of loudspeakers can make a dramatic difference in attaining the desired level distribution. Active and passive cardioid loudspeaker designs, combined with the correct signal processing, can provide excellent sound directivity. Analogous to their better-known microphone counterparts, cardioid loudspeakers exhibit high levels emitted to the front, and low levels to the rear. Directivity enables a sound system to project sound to where it is needed (the audience) while keeping it away from areas where it is not desired, resulting in better sound quality and intelligibility.
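The cardioid principle described above can be demonstrated with a toy discrete-time model: the rear-facing driver is fed the same signal, delayed by the inter-driver travel time and polarity-inverted, so the two contributions cancel behind the array. This is a deliberately simplified sketch with whole-sample delays, not an actual product algorithm:

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

// Sum the contributions of a front driver and a processed rear driver as heard
// at one listening position. acFront/acRear are the acoustic delays (in
// samples) from each driver to the listener; elecDelay is the electronic delay
// applied to the inverted rear-driver feed.
std::vector<float> sumAtListener(const std::vector<float>& x,
                                 int acFront, int acRear, int elecDelay) {
    std::vector<float> y(x.size() + (size_t) std::max(acFront, acRear + elecDelay), 0.0f);
    for (size_t n = 0; n < x.size(); ++n) {
        y[n + acFront] += x[n];             // front driver contribution
        y[n + elecDelay + acRear] -= x[n];  // rear driver: delayed and inverted
    }
    return y;
}
```

Behind the array, the front driver's sound travels the extra inter-driver distance, so with a matching electronic delay the two arrivals align and cancel sample for sample; in front, the rear driver's contribution arrives much later and the direct sound dominates.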

Going over to the setup of the sound system at the actual venue, I will show you some of the tools available for checking the integrity of amplifiers and loudspeakers. Focusing specifically on line arrays, I will reveal the inner workings of Acoustic Neighbor Detection, a patented algorithm which helps ensure that individual loudspeakers within a line array or a sub-woofer array are positioned in the right order. This helps to avoid cabling errors, which are unfortunately quite common during time-critical setup procedures.

Finally, in addition to the common EQ, compressor and limiter options found on most modern power amplifiers (outside the scope of this talk), there is a significant amount of signal processing which can take place during the most important time of all, i.e. the live performance. An interesting challenge is achieving a truly uniform frequency response over large audience areas, while also compensating for air absorption effects over long distances. One of the techniques involves the use of individual sets of FIR and IIR filters for every single loudspeaker within a line array, each of which therefore requires a dedicated amplifier channel. These filters shape the sound generated by the array to precisely match a user-defined level distribution and obtain the desired frequency response, achieving true "democracy for listeners".

Speakers

Bernardo Escalona

Software R&D, d&b audiotechnik GmbH
Software R&D at d&b audiotechnik. Bass player; music collector; motorbike freak; loves tacos.


Tuesday November 19, 2019 12:00 - 12:25 GMT
Lower River Room Puddle Dock, London EC4V 3DB

12:00 GMT

Units of measurement in C++
When writing interfaces in C++ that are expected to operate on variables that carry units of measurement, it is common to represent these variables as a numeric type, like float or int, and then to describe the expected unit using variable names. For every different type of unit a library writer wants to support, a different function must be written (for example: setDelayTimeInSeconds(double timeInSeconds), setDelayTimeInMilliseconds(double timeInMilliseconds), setDelayTimeInSamples(size_t numSamples)). This is tedious, and if these functions are member functions, changes to a different part of the class can mean having to change multiple functions. Alternatively, having a single function that can only operate on one type of unit can make calls to that function verbose, especially if different call sites have different types of units that need to be converted before being passed into the function.

In this talk, we'll look at how we can use the type system to enforce type correctness of the units we operate on. Conversion operators can be used to reduce the amount of overloads needed to handle different types of input, as well as automatically handle converting between different units. We'll look at how unit types can be used with JUCE's ADSR and StateVariableFilter classes, how they can be integrated with an AudioProcessorValueTreeState, and, time permitting, how to use a type that represents a periodic value to simplify the writing of an oscillator class.
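The core idea can be sketched in a few lines. The type and member names below are illustrative assumptions, not the talk's actual API: a strong `Seconds` type, an implicit conversion from `Milliseconds`, and an explicit, rate-aware conversion from `Samples` let a single `setDelayTime` overload serve every call site:

```cpp
#include <cstddef>

// Illustrative sketch only: these type names are ours, not the talk's API.
struct Seconds { double count; };

struct Milliseconds
{
    double count;
    constexpr operator Seconds() const { return Seconds { count / 1000.0 }; }
};

struct Samples
{
    std::size_t count;

    // Samples only become seconds given a sample rate, so this
    // conversion is an explicit, named function rather than an operator.
    constexpr Seconds toSeconds (double sampleRate) const
    {
        return Seconds { static_cast<double> (count) / sampleRate };
    }
};

// One overload now serves every unit a caller might hold.
class Delay
{
public:
    void setDelayTime (Seconds t) { delaySeconds = t.count; }
    double getDelaySeconds() const { return delaySeconds; }

private:
    double delaySeconds = 0.0;
};
```

A call site holding milliseconds writes `delay.setDelayTime (Milliseconds { 250.0 })` and the conversion happens implicitly; passing a bare `double` no longer compiles, so mixed-up units fail at compile time instead of failing in the mix.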

Speakers
Mark Jordan-Kamholz

Developer, Sinecure Audio
Mark Jordan-Kamholz is an Audio Software Developer and Designer. Currently based in Berlin, he has spent this year programming the sounds for Robert Henke's 8032av project in 6502 assembly. For the past 3 years, he has been making plugins in C++ using JUCE with his company Sinecure... Read More →


Tuesday November 19, 2019 12:00 - 12:25 GMT
Upper River Room Puddle Dock, London EC4V 3DB

12:30 GMT

Careers Fair
If you're looking for the next step in your career, please take the opportunity to sit down with companies who are actively recruiting.  Apple, Syng and Moodelizer want to meet you!  Grab some lunch and head to the exhibition area nearest to the stairs.   

Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate

12:30 GMT

Lunch
Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: BandLoop - from idea to reality
TRIBE Instruments

TRIBE Loopz (aka BandLoop) is a tool which adapts the live-looping technique to the workflow of bands. It allows each player in the band to overlay (in real time) loops of different (or the same) instruments. By adding loops on top of one another, an n-piece band can create a piece of music composed of an unlimited number of instrument parts.
Loopz consists of software implemented with JUCE plus a set of pedals. The pedals send wireless messages to the software, which collects the audio inputs from an external audio interface and live-loops the audio material. Wireless technology was chosen so the system can adapt to the spatial needs of a band whose members are distributed across the stage.

Speakers
Giovanni Cassanelli

Student, TRIBE-Instruments
I am a Junior Software Developer, graduated at Goldsmiths, University of London. I specialized in Audio Programming. I am currently working as a Data Analyst for a growing Record Label (cell-recordings.net). At the same time, I am trying to launch my personal project: TRIBE Loopz (ht... Read More →


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Bytebeat - fractal self-similarity in algorithmic music
Bytebeat is a music synthesis technique in which an audio stream of (typically 8-bit) PCM samples is generated by a function of the sample index. The function corresponds to a program that employs bitwise operators (AND, OR, XOR, NOT) and basic arithmetic operators (addition, multiplication, subtraction, division, modulo and bit shifts). The method was discovered by Ville-Matias Heikkilä in 2011. Bytebeat is also sometimes regarded as a genre of music in its own right.

In its purest form bytebeat doesn't use any scores, instruments, oscillators or samples, yet the generated songs exhibit melody and rhythm that is often complex and polyrhythmic. It can seem mysterious how one-line C programs produce such results. The talk will show how the musical and self-similarity properties of the generated waveforms follow from the mathematical properties of bitwise and arithmetic operations.

Further developments of the bytebeat technique will also be presented, using it as a control source for synthesizer parameters such as pitch, amplitude and modulation. It will also be shown that simple formulas can generate sequences of MIDI notes to feed any generic synthesizer or sampler.
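To make this concrete, here is one sample of a well-known classic bytebeat formula (a standard example of the genre, not necessarily one used in the talk). An entire "song" is a single expression evaluated per sample index:

```cpp
#include <cstdint>

// One sample of the classic "forty-two melody": the sample index t is the
// only input, and bitwise/arithmetic operators do all the composing.
// The result is truncated to 8 bits, i.e. one unsigned PCM sample,
// conventionally played back at 8 kHz.
uint8_t bytebeatSample (uint32_t t)
{
    return static_cast<uint8_t> (t * (42 & (t >> 10)));
}
```

Streaming `bytebeatSample (t)` for t = 0, 1, 2, ... to an 8-bit output reproduces the piece. The melody emerges because `t >> 10` changes only every 1024 samples, quantising time into note-length blocks: a first hint of the self-similarity the talk explores.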

Speakers
Alexander Obuschenko

Independent
I’m a freelance mobile software engineer. While I’m not working on my client’s jobs, I’m doing audio and graphics programming. One of my recent music projects is a sequencer application with intrinsic support for polyrhythmic and pure intonation music. I would love to talk... Read More →


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Interactive gestural software interaction developed in GranuRise
The demo will be based on the interactive gestural concepts used in the GranuRise project, which was developed in Max MSP.  You can find more about the project here  http://granurise.com.

The demo will focus on these gestural concepts, together with several related topics:
- how to design a virtual instrument that acts less like software and more like a musical instrument
- since almost all electronic music is made within grid-based systems, how the GranuRise project enables a more natural, non-grid-based way of interacting through gestures
- how to use gestural control to build a unique sound design without complex matrices and LFO schematics
- GranuRise's MPE integration, which takes the concept of expressive gestural control even further
- the ROLI Blocks implementation, which offers seamless integration between hardware and software, as an example of thinking about software in a more modular way

Speakers
Andrej Kobal

Self-employed, Andrej Kobal
I'm a composer, sound designer, Max MSP programmer who is constantly present in Slovenia and Europe in various important sound art installations, costume build multi-media sound solutions and unique live performances. I have build the virtual instrument GranuRise which includes an... Read More →


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Introducing your robot lead singer
To accompany the talk on singing synthesis (Track 4, Tuesday 19th 12pm), Chris will be on hand to take participants through CereProc's singing system, including:
  • The "Trump" demo - grab a microphone and croon to your heart's content, and prepare to be amazed (horrified) at our beloved POTUS sing right back at you
  • STARS transplant procedure - think your singing is bad? Scientifically confirm this first, before allowing us to switch your "bad" singing features with output from our professional singer model
  • Vox2XML service - convert a set of vocals and lyrics into CereVoice-compatible singing markup script
  • Listen to cutting-edge parametric singing synthesis via CereVoice, harnessing the latest in neural vocoders within speech technology, granting any of our available voices to sing with unparalleled naturalness and expressiveness

This demo session is intended to encourage participants to think about, explore and try out their own ideas for singing interfaces, and to offer the opportunity to assess state-of-the-art neural waveform speech and singing synthesis across CereProc's broad range of voices.

Speakers
Christopher Buchanan

Audio Development Engineer, CereProc
Chris Buchanan is Audio Development Engineer for CereProc. He graduated with Distinction in the Acoustics & Music Technology MSc degree at the University of Edinburgh in 2016, after 3 years as a signal processing geophysicist with French seismic imaging company CGG. He also holds... Read More →


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Melodrumatic - Using MIDI notes to pitch-shift via delay
Melodrumatic is an audio plugin that lets you "pitch-shift" via delay to turn unpitched audio into melodies. Controllable via MIDI or mouse :)

Currently available formats: VST3, AU, AAX, Unity
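The principle behind such plugins can be sketched as follows. This is our own illustration of pitch-shifting via delay, not Melodrumatic's actual source: sweeping a delay line's read head at a rate other than the write rate transposes the signal (the Doppler effect), and MIDI note offsets map naturally onto that resampling ratio:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative sketch: a circular buffer whose read head advances at
// `ratio` times the write rate. ratio > 1 raises pitch, ratio < 1 lowers
// it, while the effective delay time drifts, which is the effect in play.
class DopplerDelay
{
public:
    explicit DopplerDelay (std::size_t size) : buffer (size, 0.0f) {}

    void write (float sample)
    {
        buffer[writePos] = sample;
        writePos = (writePos + 1) % buffer.size();
    }

    // Linearly interpolated read, then advance the read head by `ratio`.
    float read (double ratio)
    {
        const std::size_t i0 = static_cast<std::size_t> (readPos) % buffer.size();
        const std::size_t i1 = (i0 + 1) % buffer.size();
        const float frac = static_cast<float> (readPos - std::floor (readPos));
        const float out = buffer[i0] + frac * (buffer[i1] - buffer[i0]);

        readPos = std::fmod (readPos + ratio, static_cast<double> (buffer.size()));
        return out;
    }

    // A MIDI note offset in semitones maps onto the resampling ratio.
    static double ratioForSemitones (double semitones)
    {
        return std::pow (2.0, semitones / 12.0);
    }

private:
    std::vector<float> buffer;
    std::size_t writePos = 0;
    double readPos = 0.0;
};
```

An incoming MIDI note 12 semitones above the reference would drive `read` with `ratioForSemitones (12.0)`, i.e. 2.0, replaying the delayed material an octave up.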

Speakers
David Su

Audio Programmer, Bad Dream Games
David Su is a musician, game developer, and researcher. Currently he does audio programming on Bad Dream Games' One Hand Clapping in addition to developing a suite of interactive songs with singer-songwriter-engineer Dominique Star. David recently released Yi and the Thousand Moons... Read More →


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: MoveMIDI - 3D positional movement interaction with user-defined, virtual interface for music software
MoveMIDI Description
MoveMIDI is a piece of software that allows the user to create digital music using body movements. MoveMIDI communicates with 3D tracking hardware such as the PlayStation Move to translate users’ positional body movements into MIDI messages that can be interpreted by music creation software like Ableton Live. With MoveMIDI, a user could “air-drum” their drum samples, or sweep a filter by sweeping their arm across their body. By using MoveMIDI in a performance context, electronic music performers can convey their actions to an audience through large body movements. Dancers could use MoveMIDI to create music through dance. Since many acoustic instruments require spatial movement of the body to play, the spatial interaction promoted by MoveMIDI may help users interact with music software similarly to how they interact with acoustic instruments, leveraging preexisting knowledge. This spatial familiarity may also help an audience interpret a performer’s actions.

MoveMIDI allows the user to construct and interact with a virtual 3D instrument interface which can be played by moving the body relative to the virtual interface elements. This virtual interface can be customized by the user in layout, size, and functionality. The current implementation uses a computer screen to display the virtual 3D interface to the user while visualization via head mounted display is in development.

MoveMIDI software won the 2018 JUCE Award and was published as a “Late Breaking Work” paper/poster and interactive demonstration at the ACM CHI 2019 Conference in Glasgow.

See MoveMIDI.com for more information.

Demonstration Outline
The demonstration begins with a 1-2 minute explanation of MoveMIDI. Next, MoveMIDI’s two main modes are demonstrated: Hit Mode and Morph Mode. Hit Mode allows a user to hit virtual, spherical trigger points called Hit Zones. When Hit Zones are hit, they trigger musical notes or samples by sending MIDI messages. Morph Mode allows a user to manipulate many timbral characteristics of audio simultaneously by moving their arms within a predefined 3D Morph Zone. Movements in this zone send a different MIDI Control Change message per 3D axis. Next, a 1 minute performance using MoveMIDI is given. Finally, audience members are invited to try MoveMIDI for themselves. A volunteer will be given the handheld controllers and may experiment and create music. This demonstration process will repeat as attendees move between demonstrations.

Speakers
Tim Arterbury

CEO, TesserAct Music Technology LLC / Baylor University
I am a graduate student pursuing a Master's degree in computer science and researching human-computer interaction with music software. My latest project, MoveMIDI, uses in-air body movements to control music software. See more at MoveMIDI.com. I do a lot of C++ and JUCE coding and I have a passion for making music... Read More →


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

DEMO: Multipurpose Rotary Knob with Torque-Feedback
Hands-on demo of a knob controller for smart human-machine interaction applications. 
You can try it yourself, manipulating the audio parameters of mobile VJ software and customizing your preferred haptic responses.

Speakers
Francesco Martina

Hardware Designer, CERN
Francesco studied Electronics Engineering at the University of Pisa and Scuola Superiore Sant'Anna (Pisa, Italy). Since April 2017 he is employed in the Beam Instrumentation group of CERN (Geneva) under the supervision of Dr. Christos Zamantzas, working on the mixed-signal design... Read More →


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Sound Control - Enabling fast and flexible musical interface creation for disability
This will be a hands-on demo on how to use the Sound Control software. While it is designed primarily to meet the musical needs of children and young people with profound and multiple learning disabilities, the software can be used by anyone to make music, and to be expressive in doing so, from keen amateurs through to seasoned professionals.

There is a range of input control devices to choose from (e.g. legacy PS2 golf-game controllers, motion sensors, the BBC micro:bit, etc.) and the software can produce musical outputs ranging from digitally synthesised sounds to the use and transformation of samples, with MIDI capability to connect with commercial software and DAWs.

Simon will be speaking further about this from 15:00 to 15:25 in track 3 (CMD).

Speakers
Simon Steptoe

Musical Inclusion Programme Manager, Northamptonshire Music and Performing Arts Trust
I currently work as Musical Inclusion Programme and Partnership Manager at Northamptonshire Music and Performing Arts Trust, which is the lead for the Music Education Hubs in Northamptonshire and Rutland. My role is to set up projects with children and young people in challenging circumstances... Read More →


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

14:00 GMT

Panel: Mobile vs desktop software: Fragmented or converging practices?
(Moderator) Matthew Fecher is the co-founder of AudioKit Pro
Gaz Williams is a Music Technologist and Journalist at Sonic State
Henny tha Bizness is a 4x Grammy Producer, YouTuber, and Professor
Matt Derbyshire is the ex-Head of Product & Marketing at Ampify, Focusrite, and Novation
Henrik Lenberg is the founder of Auxy

Musicians move fast. 
A whole new generation are making #1 hits and massive records without a laptop. 

Is this a fad? Should desktop plugin developers focus more on Mobile? 
Or, with new data coming out – Should mobile developers focus more on desktop?

Beyond speculation, this panel will reveal:
* Usage data and trends from popular desktop vs mobile versions
* Valuable insights from journalists and Youtubers who use thousands of plugins & apps
* New and innovative ways producers are using apps and plugins

Speakers
Henrik Lenberg

Founder, Auxy
Auxy is on a mission to put a studio in everyone's pocket and push music forward as an artform.
Matt Derbyshire

Independent
Matthew Fecher

Co-Founder, AudioKit Pro
2019  Apple "Editor's Choice" Winner. Grateful to be making music software and empowering hundreds of iOS & Mac apps with the AudioKit framework. Additionally, Matthew is the creator of the free & open-source iOS Synthesizer, AudioKit Synth One.
Gaz Williams

Journalist, Sonic State
Henny Tha Bizness

Producer, Westview Dr
Henny Tha Bizness, is a Grammy Award winning, Multi-Platinum Music Producer, Songwriter, Professor, Cinematographer and Youtuber. Henny has produced numerous hits for various recording artists including Drake, Chris Brown, Kendrick Lamar, J Cole, Lil Wayne, Jay Z, 50 Cent, Black... Read More →


Tuesday November 19, 2019 14:00 - 14:50 GMT
Auditorium Puddle Dock, London EC4V 3DB

14:00 GMT

Audio APIs & driver models : under & above the hood
Audio software developers today have a multitude of programming interfaces available for playing back and recording audio on Desktops, Tablets, and Smart Phones.

These software interfaces ultimately provide a way to send/receive audio to/from hardware audio interfaces. This involves software abstraction layers, operating system layers, device driver layers, and hardware abstraction layers.

This talk aims to provide an overview of these layers and some details of how they interact with each other on different operating systems with different driver models.

The goal for the talk is to help audio programmers develop a sound understanding of various options available, and to understand how various layers interact under the hood. This understanding would help the developers choose the right software interface for their requirements with respect to latency, spatialization, performance, mixing, sample rate and format conversion, etc…

It will also help developers identify, diagnose, and fix problems in the audio pipeline, and enable them to extend pre-existing libraries and wrappers with features those libraries and wrappers do not yet expose.

Given the breadth of this topic, this talk will not attempt to dive deep, but it can become a starting point for developers who want to explore further.

Speakers
Devendra Parakh

VP Software Development, Waves Inc.
Devendra Parakh is the VP, Software Development at Waves. He has been writing device drivers and applications for desktop and embedded platforms for more than twenty-five years. Most of his work in the past decade has been with Audio - both Professional and Consumer audio. In addition... Read More →


Tuesday November 19, 2019 14:00 - 14:50 GMT
Lower River Room Puddle Dock, London EC4V 3DB

14:00 GMT

Porting the Hierarchical Music Specification Language (HMSL) to JUCE
See an experimental music system from the 1980s running on today's laptops.
HMSL is a language and interactive tool for music composition and performance. It was recently resurrected using JUCE. The author will describe features of the language and walk through many examples. The author will also discuss the problems encountered and solved during the port to JUCE.

HMSL features abstract multi-dimensional shapes, live-coding, MIDI tools, algorithmic composition utilities, score entry dialect, and a hierarchical scheduler. It also supports a cross-platform GUI toolkit and several editors.
Typical HMSL pieces might involve:
  • hyper-instruments with controls for harmonic complexity or density
  • MIDI ring networks for ensembles
  • dynamic just-intonation using a precursor to MPE
  • complex polyrhythms
  • algorithmic real-time development of a theme
  • real-time audio using a Motorola DSP 56000

HMSL is a set of object oriented extensions to the Forth language.


Speakers
avatar for Phil Burk

Phil Burk

Staff Software Engineer, Google Inc
Music and audio software developer. Interested in compositional tools and techniques, synthesis, and real-time performance on Android. Worked on HMSL, JForth, 3DO, PortAudio, JSyn, WebDrum, ListenUp, Sony PS3, Syntona, ME3000, Android MIDI, AAudio, Oboe and MIDI 2.0.


Tuesday November 19, 2019 14:00 - 14:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:00 GMT

Creator Tools: Building Kontakt instruments without Kontakt
There aren't many software instruments that are as widely used 17 years after their first version as Kontakt. In this talk, we will briefly go through its history, focusing on how a strong community of content creators has been a central component of its success. From early sample libraries, to increasingly advanced instruments, it is the content hosted in Kontakt that makes so many musicians, producers and composers return to it, year after year.  

After identifying some key milestones in its evolution, we will dive deeper into its present and future, and the role the Creator Tools play in it. And going hands-on is the best way to do this, so we won't be shy about building a little something on stage.


Speakers
Dinos Vallianatos

Product Owner, Authoring Platforms, Native Instruments


Tuesday November 19, 2019 15:00 - 15:25 GMT
Auditorium Puddle Dock, London EC4V 3DB

15:00 GMT

No tracks or graphs? Designing sound-based educational audio workstations.
The Compose with Sounds project was set up by a network of academics and teachers across the EU with the goals of increasing exposure to sound-based/electroacoustic practice in secondary schools and creating provisional tools with supporting teaching materials to further enhance usage and exposure to music technology amongst teenagers.  This talk will present two large software tools that have been designed as part of this ongoing project. A new digital audio workstation entitled Compose with Sounds (CwS), alongside a networked environment for experimental live performance, Compose with Sounds Live (CwS Live). Both of these are scheduled for free distribution in late 2019.
Unlike traditional audio workstations that are track or graph based, these tools and the interactions within are based on sound-objects.  This talk will present the trials and tribulations of developing these tools and the complex technical and UX dichotomies that emerged when utilising academics, teachers and students as active components in the development process.
Audio software tools designed to run in school classrooms (where the likelihood of access to high powered computers is small) require careful audio and UX optimisations so that they act as stepping stones to more industrial workstations.  The talk will discuss a collection of the unique audio optimisations that had to be made to enable the creation of these tools while maintaining minimal audio latency and jitter. Alongside this, it will present various approaches and concessions that had to be made to empower students to move onto more traditional track-based workstations. 

The talk will be broken down into four sections: exploring the requirements of pedagogical audio tools;  designing an approachable UX for sound-based music; audio optimisations required to enable true sound-based interactions;  the dichotomies of designing sound-based tools that empower users to subsequently utilise track or graph-based workstations. 



Speakers
Stephen Pearse

Senior Lecturer, University of Portsmouth
C++ Audio Software engineer specialising in the creation of educational musical tools and environments.


Tuesday November 19, 2019 15:00 - 15:25 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

15:00 GMT

Sound Control: Enabling fast and flexible musical interface creation for disability (with demo)
This is an opportunity to learn about the story behind the development of the Sound Control software, a unique collaboration between computer music researcher and musician, Dr Rebecca Fiebrink and her team (at the time based at Goldsmiths London), specialist musicians and special needs music educators from Northamptonshire, and one of the country's largest music education Hubs, The Northamptonshire Music and Performing Arts Trust.

The Sound Control software will be demonstrated as part of the session. The software has been designed primarily to meet the musical needs of children and young people with profound and multiple learning disabilities, and its evolution into its present form has been profoundly influenced by these young people.

Researcher Sam Parke-Wolff, who worked with Dr Fiebrink on writing the software, will be on hand to talk about the technicalities and the machine-learning algorithms underlying the program and its unique operation.

Speakers
Simon Steptoe

Musical Inclusion Programme Manager, Northamptonshire Music and Performing Arts Trust
I currently work as Musical Inclusion Programme and Partnership Manager at Northamptonshire Music and Performing Arts Trust, which is the lead for the Music Education Hubs in Northamptonshire and Rutland. My role is to set up projects with children and young people in challenging circumstances... Read More →


Tuesday November 19, 2019 15:00 - 15:25 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:00 GMT

The many ways to play audio on the web
Modern browsers provide a variety of ways to play and create audio these days, and as developers we have a lot of APIs to choose from. This talk gives an overview of the available APIs: their primary use cases, how they can be combined, and whether they can already be used across all browsers.

For those of us who mostly know APIs like the Web Audio API as a compile target for DSP languages like FAUST or SOUL, this talk will outline how those APIs work internally. What, for example, is the advantage of an AudioWorklet compared to a ScriptProcessorNode? Why would one use an audio element with the Media Source Extensions API? How can external signals be consumed and sent out via WebRTC? Which browsers support the Web MIDI API for sending and receiving MIDI messages to and from connected devices? How can all of that be kept in sync with the TimingObject? And the list goes on with many more APIs that exist to play, create, or control media on the web.

Of course all of those APIs have a different kind of maturity and vary a lot when it comes to browser support. This talk aims to give an overview about the current state of audio on the web. It also tries to show how we as a community can get involved in the process of creating and updating all those APIs.

Speakers
Christoph Guttandin

Web Developer, Media Codings
I'm a freelance web developer specialized in building multi media web applications. I'm passionate about everything that can be used to create sound inside a browser. I've recently worked on streaming solutions and interactive music applications for clients like TV stations, streaming... Read More →


Tuesday November 19, 2019 15:00 - 15:25 GMT
Lower River Room Puddle Dock, London EC4V 3DB

15:30 GMT

Coffee break
Tuesday November 19, 2019 15:30 - 15:50 GMT
The Mermaid

15:30 GMT

Poster: Embodied interaction with sound
Our poster discusses the use of gestural body movement for interaction with music software. While interaction with music software and virtual instruments has traditionally involved using Graphical User Interfaces (GUIs) and/or hardware MIDI controllers, we discuss the unique interactions and interfaces found in recent body movement controllers. Some of these controllers have been designed with the intention of being used in a performance context to provide visualization for an audience of the performer’s actions on the music. Another goal of these controllers is to ensure a high degree of expression during their use. Some of these controllers are used to make music creation more accessible for all people. The idea of body motion based interaction with electronic instruments is far from new, with inventions such as the Theremin existing since 1920. Today, interfaces that allow the use of mid-air body gestures to control music applications continue to become available such as the MI.MU Gloves, Wave MIDI Ring, Xbox Kinect, and the Leap Motion Controller. These interfaces are often accompanied by software tools enabling the user to design their custom gestural interactions with audio parameters.

Speakers
Balandino Di Donato

Lecturer in Creative Computing, University of Leicester
Tim Arterbury

CEO, TesserAct Music Technology LLC / Baylor University
I am a graduate student pursuing a Master's degree in computer science and researching human-computer interaction with music software. My latest project, MoveMIDI, uses in-air body movements to control music software. See more at MoveMIDI.com. I do a lot of C++ and JUCE coding and I have a passion for making music... Read More →


Tuesday November 19, 2019 15:30 - 16:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

15:50 GMT

Panel: Raising money and selling your company (or project or code)
Do you wonder how to launch a project or company? What are the pros and cons of Kickstarter or other crowdfunding campaigns? What if you have a company you want to launch -- and an idea for a successful app -- but need to raise money? Or you want to become best in breed as a software development house and need additional funding? And once your company or project is rolling smoothly, what's the exit strategy? Come to this panel of lawyers, audio company CEOs, developers and finance specialists. We'll demystify different forms of financing, including grants, friends and family, venture capital, private equity and debt. We will also provide a "real life" example of a successful negotiation for a strategic partnership and leave you with practical tips to jumpstart your next project.

Speakers
Tim Exile

Founder, Endlesss
Phil Dudderidge

Executive Chairman, Focusrite plc
Phil founded Focusrite Audio Engineering Ltd in 1989 to purchase the business assets of Focusrite Ltd, a company founded by Rupert Neve in 1985. He led the acquisition by Focusrite of Novation (synths and keyboards) in 2004. Currently the group has c.$100m. revenue per annum and continues... Read More →
Francine Godrich

General Counsel, Focusrite
Francine is General Counsel (i.e. does lots of legal stuff) and Company Secretary (i.e. deals with corporate governance administration) to one of the world’s most passionate and prestigious Music Tech plcs. Francine spends a lot of her time looking at how Focusrite plc can grow... Read More →
Zain Qazi

Business Controller, ROLI
Zain is the Business Controller at ROLI. He has been with the company since its seed round funding and has seen and helped the company grow from a three-person company to one with four subsidiaries. He has been involved in the group's fundraising, acquisitions, bridge-financing and... Read More →
Heather Rafter

Principal, RafterMarsh US


Tuesday November 19, 2019 15:50 - 16:40 GMT
Lower River Room Puddle Dock, London EC4V 3DB

15:50 GMT

Introduction to MIDI 2.0
What is MIDI 2.0, anyway? That is the mission of this session. We'll explain the current state of MIDI 2.0 specifications, and provide new detail of specifications to be completed soon. This will include brief reviews of MIDI-CI, Profile Configuration and Property Exchange. The focus will be the new MIDI 2.0 Protocol, with some details of the MIDI 2.0 packet and message designs and how MIDI-CI is used to achieve maximum interoperability with MIDI 1.0 and MIDI 2.0 devices. There will be little time for Q&A, but we'd love to talk shop with you at the MIDI table in the main hall.
The presenters are key architects of the MIDI 2.0 specifications in the MIDI Manufacturers Association.

Speakers
Mike Kent

Owner, MK2 Image Ltd.
Mike is a synthesizer geek. Mike works as an independent consultant specializing in product development for audio, musical instruments, professional audio/video, and USB. Mike worked for Roland in R&D for 22 years. He has also contributed to audio or music projects for Yamaha, Apple... Read More →
Brett Porter

Lead Engineer, Audio+Music, Art+Logic
Brett holds a B.M. in Composition and M.M. in Electronic/Computer Music from the University of Miami Frost School of Music. At Art+Logic since 1997, he's worked on custom software development projects of all kinds but prefers to focus on the pro audio and MI world.
Florian Bomers

Founder, Bome Software
Will translate MIDI for food. Florian Bömers has been using MIDI since the mid-80s and started programming audio and MIDI applications already in his childhood. Now he manages his company Bome Software, which creates standard software and hardware solutions for MIDI translation... Read More →


Tuesday November 19, 2019 15:50 - 16:40 GMT
Auditorium Puddle Dock, London EC4V 3DB

15:50 GMT

Offloading audio and ML compute to specialised low power SoCs
Likely audience
System designers, integrators and algorithm developers interested in building battery-powered audio and sensor hardware ranging from speaker boxes and mainstream consumer electronics, to toys, musical instruments and controllers.

Abstract
Audio, speech and gestural interfaces have become ubiquitous on mobile and wireless devices. This has driven the development of innovative low-power DSPs that offer real-time vector and matrix maths, specialised for audio and sensor DSP/ML processing, at a fraction of the battery power consumption.

This talk will cover different perspectives on offloading power-hungry real-time processing from the application processor, to a highly energy-efficient SoC (System on Chip).

Speakers
avatar for Vamshi Raghu

Vamshi Raghu

Senior Manager, Knowles
Currently lead the developer and integrator experience engineering teams at Knowles Intelligent Audio. In past lives, I helped create music games at Electronic Arts, game audio authoring tools at Audiokinetic, and enabled audio experience features for early stage startups.


Tuesday November 19, 2019 15:50 - 16:40 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:50 GMT

Run fast, sound great: Keys to successful voice management systems
An excessive number of sound sources (or voices) is a common problem in game audio development. This is especially true in complex, open-world games where the player can travel anywhere in the world and interact with thousands of entities. Voice count quickly becomes a problem that must be addressed, but keeping the audio aesthetic intact is a real challenge!

Frontier Developments’ lead audio programmer, Will Augar, shows how the company’s voice management technology has evolved and gives a detailed insight into the latest system. The team’s focus on software engineering best practices, such as separation of concerns and “don’t repeat yourself”, gives developers a great example of how to tackle complex systems with strict performance requirements.

This talk aims to show audio programmers how they can develop performant voice management systems that do not compromise on mix quality.
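The trade-off such systems manage can be illustrated with a toy voice-stealing policy (purely hypothetical, not Frontier's system): when the voice budget is exceeded, rank voices by priority and audibility and drop the least important ones first.

```python
# Hypothetical voice-stealing sketch: keep the N most important voices,
# ranked first by designer-assigned priority, then by estimated audibility.

from dataclasses import dataclass

@dataclass
class Voice:
    name: str
    priority: int      # higher = more important
    amplitude: float   # 0..1 loudness estimate after distance attenuation

def cull_voices(voices, budget):
    """Return the voices to keep, most important first."""
    ranked = sorted(voices, key=lambda v: (v.priority, v.amplitude), reverse=True)
    return ranked[:budget]

voices = [Voice("music", 10, 0.8), Voice("footstep", 1, 0.05),
          Voice("explosion", 5, 1.0), Voice("ambience", 3, 0.2)]
for v in cull_voices(voices, budget=3):
    print(v.name)
# music, explosion and ambience survive; the quiet footstep is stolen
```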

Speakers
avatar for Will Augar

Will Augar

Principal Audio Programmer, Frontier Developments plc
Will Augar is the lead audio programmer at Frontier. He has been a professional software developer for over 10 years, working in both the games and music industries. He was an audio programmer at FreeStyleGames working on the DJ Hero series and has worked on Akai MPC at inMusic. In recent... Read More →


Tuesday November 19, 2019 15:50 - 16:40 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

16:45 GMT

Keynote: Reshaping live performance with MI.MU
In this engaging and immersive session, Imogen will demonstrate the astonishing power and impact of MI.MU Gloves. Making use of a completely customisable series of gestures - from finger movements, to hand postures, to arm motion - MI.MU Gloves give artists total control over their musical performances without the need for wires or other equipment. With a six-hour battery life and low-latency Wi-Fi communication through the bespoke MI.MU ‘Glover’ software, performers can map gestures to any MIDI-enabled music, lighting or VJing software, with Artificial Intelligence (AI) allowing them to mix and customise gestures into near-limitless combinations. MI.MU was born out of a desire to use the complex and intuitive movement of the human body to free performers from traditional musical setups. Imogen will demonstrate how one can better engage the audience with more dynamic and visual performances, liberated from laptops, keyboards and controllers.


Speakers
avatar for Imogen Heap

Imogen Heap

Imogen Heap


Tuesday November 19, 2019 16:45 - 18:15 GMT
Auditorium Puddle Dock, London EC4V 3DB

18:15 GMT

VENUE CLEARED
We will be clearing the venue so we can set up the live show, which will start at 8pm (20:00). An evening meal will not be served tonight.

Tuesday November 19, 2019 18:15 - 18:30 GMT
The Mermaid

18:15 GMT

Women in Audio Social Meet Up
Tuesday November 19, 2019 18:15 - 20:30 GMT
All Bar One, Ludgate Hill, EC4M 7DE
 
Wednesday, November 20
 

09:00 GMT

Keynote: Making Music with AI: Challenges and Results
Artificial intelligence techniques are increasingly used for music creation. Several music albums have been produced in which substantial parts of the music were generated by AI. François will give an overview of recent results in this domain, emphasising the inherently ill-defined nature of music creation, and illustrate his talk with a few recent projects in this area.

Speakers
avatar for Francois Pachet

Francois Pachet

Director, Spotify
François got his Ph.D. and Habilitation from Université Pierre et Marie Curie (UPMC), after an engineering degree from École des Ponts et Chaussées. He was an assistant professor in artificial intelligence at UPMC until 1997, when he joined SONY Computer Science Laboratory P... Read More →


Wednesday November 20, 2019 09:00 - 10:00 GMT
Auditorium Puddle Dock, London EC4V 3DB

10:00 GMT

Coffee break
Wednesday November 20, 2019 10:00 - 10:30 GMT
The Mermaid

10:00 GMT

Poster: Using dynamic time warping to improve the classical music production workflow
The current music production workflow - recording, editing, mixing, and mastering - requires a great deal of manual work from the sound engineer. Much of this process can be streamlined with existing technology. This poster presents a project that aims to bring recent advances in Music Information Retrieval (MIR) to music production tools in order to bridge this gap. The work presented here comes from a Master’s thesis project at the Music Technology Lab at the Massachusetts Institute of Technology (MIT).

The goal of this project is to explore all areas in the music production workflow (with a focus on classical music) that could benefit from digital signal processing-based tools, to build and iterate on these tools, and to transform the tools into products that are beneficial and easy to use. We collaborated with the Boston Symphony Orchestra (BSO) sound engineers to gather requirements for the project, which led to the identification of two potential tools: an automatic marking transfer (AMT) system and an audio search (AS) system. We have since then collaborated with other potential users for both AMT and AS tools, including sound engineers from radio stations in the Boston area. This has enabled us to identify additional workflows and finalize requirements for these tools. Based on these, we have created standalone applications for AMT and AS.

This poster will share the motivation for our work, the technical details of the design, implementation, and evaluation of AMT and AS, a demonstration of the tools, and future directions for this work.
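The alignment technique named in the title can be sketched in a few lines. The classic dynamic time warping recurrence below compares two 1-D feature sequences; a production system like the one described would align real audio features (e.g. chroma or MFCC frames) rather than scalars.

```python
# Minimal dynamic time warping (DTW) sketch: the core alignment step
# behind transferring edit markings between takes at different tempos.

def dtw_distance(a, b):
    """Cost of the cheapest monotonic alignment between sequences a and b."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: insertion, deletion, or match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Two renditions of the same phrase at different tempos align with zero cost:
print(dtw_distance([1, 2, 3, 4], [1, 1, 2, 3, 3, 4]))  # → 0.0
```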

Speakers
avatar for Smriti Pramanick

Smriti Pramanick

Student, Massachusetts Institute of Technology
I am a recent graduate of the Massachusetts Institute of Technology (MIT), where I majored in computer science and minored in music (BS ’18, MEng ’19). I am interested in the intersection of music and technology, particularly how we can use technology to improve our musical experiences... Read More →


Wednesday November 20, 2019 10:00 - 10:30 GMT
Newgate Room Puddle Dock, London EC4V 3DB

10:30 GMT

Developing a rich, cross-platform consumer experience for ROLI LUMI with JUCE, React Native and Unity
ROLI’s LUMI app aims to provide an attractive, fun, consumer-friendly way to learn how to play music in conjunction with the LUMI Keys keyboard.

A key challenge in developing the app was to rapidly develop and iterate on a user interface with game-quality visuals and slick native navigation, while harnessing the power of JUCE for the audio engine and making the app cross-platform across iOS and Android.

In this talk, the app’s lead developer will discuss how the team combined JUCE (with C++), React Native (with Javascript) and Unity (with C#) to achieve these goals.

The talk will cover:
  • An introduction to the LUMI app
  • A high-level overview of how each part of the tech stack works and the reasons for choosing it
  • Detail on how the three components co-exist, interface and interact with each other, both at a conceptual level and at a code level, with a particular focus on how JUCE and React Native are integrated
  • A discussion of the pros and cons of the overall approach, and learnings from the project

Speakers
avatar for Tom Duncalf

Tom Duncalf

Senior Software Engineer, Independent
Software developer currently leading development of the LUMI app for ROLI, using React Native, JUCE and Unity.


Wednesday November 20, 2019 10:30 - 11:20 GMT
Lower River Room Puddle Dock, London EC4V 3DB

10:30 GMT

Real-time 101 - part I: Investigating the real-time problem space
“Real-time” is a term that gets used a lot, but what does it really mean to different industries? What happens when our “real-time” system doesn’t perform in real-time? And how can we ensure that we don’t get into this situation?
This talk aims to discuss what we mean by a real-time system, the practices that can be used to make sure it stays real-time and, in particular, how these can be subtly or accidentally abused, increasing the risk of violating your real-time constraints.
We’ll take a detailed look at some of the considerations for real-time systems and the costs they involve, such as system calls, allocations and priority inversion. We’ll then examine the common tools in a programmer’s toolbox, such as mutexes, condition variables and atomics, how these interact with real-time threads and what costs they can incur.
This talk aims to ensure attendees of all experience levels leave with a solid understanding of the problems in the real-time domain and an overview of the tools commonly used to solve them.
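One pattern this problem space leads to can be sketched as follows: since blocking on a mutex in the audio callback risks priority inversion and missed deadlines, the audio thread uses a non-blocking try-lock and simply keeps the previous data when the lock is contended. This Python sketch is illustrative only (real implementations are in C++ and rely on memory-ordering guarantees Python does not provide).

```python
# Illustrative sketch (not from the talk): the audio callback must never
# block, so instead of lock.acquire() it uses a non-blocking try-lock and
# keeps last block's parameters if the GUI thread currently holds the lock.

import threading

class ParamExchange:
    def __init__(self):
        self._lock = threading.Lock()
        self._pending = None          # written by the GUI thread
        self.active = {"gain": 1.0}   # owned by the audio thread

    def update_from_gui(self, params):   # GUI/message thread: may block
        with self._lock:
            self._pending = dict(params)

    def sync_on_audio_thread(self):      # audio thread: must not block
        if self._lock.acquire(blocking=False):
            try:
                if self._pending is not None:
                    self.active = self._pending
                    self._pending = None
            finally:
                self._lock.release()
        # if the try-lock fails, we keep last block's parameters

ex = ParamExchange()
ex.update_from_gui({"gain": 0.5})
ex.sync_on_audio_thread()
print(ex.active["gain"])  # → 0.5
```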


Speakers
avatar for Dave Rowland

Dave Rowland

Software Developer, Tracktion
Dave Rowland is the director of software development at Audio Squadron (owning brands such as Tracktion and Prism Sound), working primarily on the digital audio workstation, Waveform and the engine it runs on. Other projects over the years have included audio plugins and iOS audio... Read More →
avatar for Fabian Renn-Giles

Fabian Renn-Giles

Software engineer, Fielding DSP Ltd.
Fabian Renn-Giles (PhD) is a freelance C++ programmer, entrepreneur and consultant in the audio software industry. Up until recently he was staff engineer at ROLI Ltd. and the lead maintainer/developer of the JUCE C++ framework (www.juce.com) - an audio framework used by thousands... Read More →


Wednesday November 20, 2019 10:30 - 11:20 GMT
Auditorium Puddle Dock, London EC4V 3DB

10:30 GMT

Support of MIDI2 and MIDI-CI in VST3 instruments
Abstract: The recent extensions of the MIDI standard, namely MIDI 2.0 and MIDI-CI (Capability Inquiry), create many opportunities to develop hardware and software products that surpass previous products in terms of accuracy, expressiveness and convenience. While things should become easier for users, the complexity of supporting MIDI as a developer will increase significantly. In this presentation we will give a brief overview of these new MIDI extensions, then discuss how these changes are reflected in the VST3 SDK and what plug-in developers need to do to make use of these new opportunities. Fortunately, many of these new capabilities can be supported with little to no effort, thanks to the design principles and features of VST3, which will also be discussed. We may also briefly touch on questions regarding support of these new MIDI capabilities from the perspective of hosting VST3 plug-ins.
The presentation will start with short overviews of MIDI 2.0, MIDI-CI and VST3, then dive into each specific MIDI extension to put it into the context of the related concepts in VST3. We will start with MIDI 2.0 per-note controllers and VST3 note expression, then look into MIDI 2.0 pitch-handling methods and compare them with VST3. After that, several further areas such as
  • MIDI 2.0 - increased resolution
  • MIDI 2.0 - Channel groups
  • MIDI-CI - Program Lists
  • MIDI-CI - Recall State
will be put in context with VST3.
The presentation will be held by two senior Steinberg developers who have many years of experience supporting and contributing to VST, and supporting MIDI inside Steinberg's software products, especially Cubase and Nuendo.

Presenters:
  • Arne Scheffler has been working as a senior developer at Steinberg for 20 years in several areas. He is the main contributor to the cross-platform UI framework VSTGUI.
  • Janne Roeper has been working as a senior developer at Steinberg for more than 20 years, especially in the area of MIDI support in Cubase and Nuendo.
Both have contributed to the VST specification since VST 2 and love making music.

Speakers
avatar for Janne Roeper

Janne Roeper

Software Developer, Steinberg
My interests are making music together with other musicians in realtime, music technology, especially expressive MIDI controllers, programming, composing, yoga, meditation, piano, keyboards, drums, bass and other instruments, agile methodologies, computers and technology in general... Read More →
avatar for Arne Scheffler

Arne Scheffler

Software Developer, Steinberg
I've been working at Steinberg for 20 years now and have used Cubase for 30 years. I'm the maintainer and main contributor of the open-source VSTGUI framework. If you want to know anything about VSTGUI, Cubase or Steinberg, talk to me.


Wednesday November 20, 2019 10:30 - 11:20 GMT
Upper River Room Puddle Dock, London EC4V 3DB

11:30 GMT

Building game audio plugins for the Unity engine
Unity (developed by Unity Technologies) is one of the most widely used game engines of recent years. One of its main strengths is that it makes developing cross-platform 2D and 3D games and multimedia applications easy. Since Unity 5, its Native Audio Plugin SDK has allowed developers to create game audio plugins (DSP effects, synthesizers, etc.) that extend the range of factory effects bundled with the engine. But this is not the only way. This talk will cover some of the different approaches available for creating such plugins and how to get started. It will also explain some details about the development of Voiceful Characters from Voctro Labs, a native plugin for Unity that synthesizes various types of voices using AI technology, with fully configurable parameters such as timbre, speaking speed, gender or emotion.

Speakers
avatar for Jorge Garcia

Jorge Garcia

Software Consultant, Independent
I am an audio software consultant with more than 10 years of experience working in games, professional audio, broadcast and music. In this time I have participated in projects for MIDAS/Behringer, Codemasters, Activision, Mercury Steam and Electronic Arts as well as various record... Read More →


Wednesday November 20, 2019 11:30 - 11:55 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

11:30 GMT

Hybridizing FAUST and SOUL
 FAUST
FAUST (Functional Audio Stream) is a functional programming language for sound synthesis and audio processing, working at sample level, with a strong focus on the design of synthesizers, musical instruments and audio effects. The core component of FAUST is its compiler, which translates any FAUST digital signal processing (DSP) specification into a wide range of non-domain-specific languages such as C++, C, Java, LLVM bitcode or WebAssembly. Thanks to a wrapping system called "architectures", code generated by FAUST can easily be compiled into a wide variety of objects, ranging from audio plugins to standalone applications or smartphone and Web applications.
 SOUL
The SOUL (SOUnd Language) platform is a language and an API. The language is a small, carefully crafted DSL for writing the real-time parts of an audio algorithm. The API is designed to deploy that SOUL code to heterogeneous CPUs and DSPs, both locally and remotely. The SOUL language is secure by design and can be safely executed on remote devices, inside drivers, or inside bare-metal or real-time kernels, even without a sandbox. SOUL programs are structured in a graph-like form. They can be JIT-compiled and dynamically redeployed onto target processors with different parallelisation characteristics.
Hybridizing FAUST and SOUL
Both approaches share common ideas: sample level DSP computation, fixed memory and CPU footprints, dynamic JIT compilation, CPU efficiency, multi-targets deployment (native and embedded platforms, web...).
After a possible Brexit, should each language and its developer community remain on their own territory? We do not think so: each approach has its advantages and disadvantages. Depending on their needs, some programmers prefer the imperative SOUL approach; others prefer the more declarative FAUST mathematical specification.

I will show how the two languages can be used together, and even possibly "hybridized", thanks to several tools developed this year in close collaboration with the SOUL developers: the Faust => SOUL backend, now part of the Faust compilation chain, and several tools to help combine the two languages. Several working examples will be demonstrated during this 25-minute session, as well as during the "Build a synth with SOUL" workshop.


Speakers
avatar for Stéphane Letz

Stéphane Letz

Researcher, GRAME
Researcher at GRAME-CNCM in Lyon, France. Working on the Faust Audio DSP language and eco-system.


Wednesday November 20, 2019 11:30 - 11:55 GMT
Lower River Room Puddle Dock, London EC4V 3DB

11:30 GMT

Live musical variations in JUCE
In this talk I will give insight into my recent work with Icelandic artists Björk and Ólafur Arnalds.
Together with them I have worked on creating plugins that are used in their live performances, manipulating both audio and MIDI. I will give a quick demonstration of how the plugins work and also describe the process of working with artists to bring their ideas to life in JUCE code.

From idea to prototype
How to take a broadly scoped idea and deliver a working prototype. I present the approaches taken in iterating on ideas with artists. When working on software in an artistic context, descriptions can often be very vague, or requirements hard to understand. But instead of just saying "No, that's not possible", you can take a step back and look at the problem differently by emphasizing what the outcome should look or sound like and working your way towards it in a practical way - without compromising the originality of the idea.

Speeding up the process
Integrating freely available 3rd-party libraries such as aubio and Faust with JUCE for fast idea validation and prototyping was essential in this project. I needed rudimentary pitch shifting, onset detection and pitch detection. Not having the resources to implement them myself before the deadline, I chose to use aubio and Faust, with great results.

Speakers
avatar for Halldór Eldjárn

Halldór Eldjárn

Audio developer, Inorganic Audio
I'm an Icelandic musician and a programmer. I write music, and work with other artists on creating new ways of expression. My main interest is augmenting creativity with technology, in a way that inspires the artist and affects her work.


Wednesday November 20, 2019 11:30 - 11:55 GMT
Upper River Room Puddle Dock, London EC4V 3DB

11:30 GMT

Real-time 101 - Part II: The real-time audio developer’s toolbox
“Real-time” is a term that gets used a lot, but what does it really mean to different industries? What happens when our “real-time” system doesn’t perform in real-time? And how can we ensure that we don’t get into this situation?
This talk is presented in two parts. This is the second part which takes an in-depth look at the difficult problem of synchronization between real-time and non-real-time threads. This talk will share insights, tricks and design patterns, that the author has established over years of real-time audio programming, and has ultimately led to the creation of the open-source farbot library. At the end of this talk, you will be equipped with a set of simple design rules guiding you to the correct solution for various real-time challenges and synchronization situations.
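A staple of this toolbox is the single-producer/single-consumer FIFO. The sketch below shows the idea in Python purely for illustration (farbot itself is a C++ library, and Python lacks the atomic memory ordering a real implementation needs): with exactly one writer and one reader, each index is only ever advanced by one thread, so neither side has to block.

```python
# SPSC FIFO sketch: the producer (e.g. GUI thread) only advances write_idx,
# the consumer (e.g. audio thread) only advances read_idx, and both sides
# return immediately instead of blocking when the queue is full or empty.

class SPSCFifo:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write_idx = 0   # only advanced by the producer
        self.read_idx = 0    # only advanced by the consumer

    def push(self, item):
        nxt = (self.write_idx + 1) % self.capacity
        if nxt == self.read_idx:
            return False                  # full: drop rather than block
        self.buf[self.write_idx] = item
        self.write_idx = nxt
        return True

    def pop(self):
        if self.read_idx == self.write_idx:
            return None                   # empty: never blocks
        item = self.buf[self.read_idx]
        self.read_idx = (self.read_idx + 1) % self.capacity
        return item

q = SPSCFifo(4)
q.push("note-on")
q.push("note-off")
print(q.pop(), q.pop(), q.pop())  # → note-on note-off None
```

One slot is deliberately kept empty so that "full" and "empty" can be told apart from the indices alone.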


Speakers
avatar for Dave Rowland

Dave Rowland

Software Developer, Tracktion
Dave Rowland is the director of software development at Audio Squadron (owning brands such as Tracktion and Prism Sound), working primarily on the digital audio workstation, Waveform and the engine it runs on. Other projects over the years have included audio plugins and iOS audio... Read More →
avatar for Fabian Renn-Giles

Fabian Renn-Giles

Software engineer, Fielding DSP Ltd.
Fabian Renn-Giles (PhD) is a freelance C++ programmer, entrepreneur and consultant in the audio software industry. Up until recently he was staff engineer at ROLI Ltd. and the lead maintainer/developer of the JUCE C++ framework (www.juce.com) - an audio framework used by thousands... Read More →


Wednesday November 20, 2019 11:30 - 12:30 GMT
Auditorium Puddle Dock, London EC4V 3DB

12:05 GMT

"Did you hear that?" Learning to play video games from audio cues
The aim of this talk is to introduce an exciting new area of research and seek interest from audio developers to join the project and offer technical expertise on the subject. This work has recently been accepted for publication at the IEEE Conference on Games; the research paper is available here.

Talk abstract:

In this talk I will describe work in progress regarding an interesting direction of game-playing AI research: learning to play video games from audio cues only. I will highlight that current state-of-the-art techniques rely either on visuals or symbolic information to interpret their environment, whereas humans benefit from the processing of many other types of sensor input. Sounds and music are key elements in games, which not only affect player experience, but gameplay itself in certain scenarios. Sounds within games can be used to alert the player to a nearby hazard (especially in darkness), inform them that they have collected an item, or provide clues for solving certain puzzles. This additional sensory output is different from traditional visual information and allows for many new gameplay possibilities.

Audio design in games also raises some important challenges when it comes to inclusivity and accessibility. People who may be partially or completely blind rely exclusively on audio, as well as some minor haptic feedback, to play many video games effectively. Including audio as well as visual information within a game can make completing it much more plausible for visually impaired players. Additionally, individuals with hearing difficulties would find it hard to play games that are heavily reliant on sound. Intelligent agents can help to evaluate games for individuals with disabilities: if an agent is able to successfully play a game using only audio or visual input, then this could help validate the game for the corresponding player demographics. 

Speakers
avatar for Raluca Gaina

Raluca Gaina

PhD Student, Queen Mary University of London
I am a research student interested in Artificial Intelligence for game playing. I'm looking to have conversations about game-playing AI using audio input (as opposed to, or complementing, traditional visual or symbolic input), with regard to accessibility in games.


Wednesday November 20, 2019 12:05 - 12:30 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

12:05 GMT

Blueprint: Rendering React.js to JUCE
Blueprint is a hybrid C++/JavaScript library and JUCE module that lets you build native JUCE apps and audio plugins using React.js. This talk will introduce and demonstrate Blueprint by first briefly introducing React.js and explaining how it works under the hood, then discussing how Blueprint can leverage those inner workings to provide a juce::Component backend to React. We'll compare with alternative approaches to introducing web technologies to the JUCE stack, such as React Native and Electron, to show how Blueprint can offer a more lightweight, flexible, and familiar way of working with JUCE while leveraging the power and speed of React.js.

Speakers
avatar for Nick Thompson

Nick Thompson

Software Developer, Syng
Nick Thompson is the founder and developer of Creative Intent, where he's released three audio plug-ins, Temper, Tantrum, and Remnant. The latter showcases Blueprint, a new user interface library he developed that brings the power and flexibility of React.js to JUCE. He recently joined... Read More →


Wednesday November 20, 2019 12:05 - 12:30 GMT
Upper River Room Puddle Dock, London EC4V 3DB

12:05 GMT

How to prototype audio software
Prototyping your software before you build it for real saves you a lot of development time if you do it at all, and a lot more if you do it right.
Making prototypes is helpful in most areas of design, but it’s particularly important in music software because users need your interface to be as intuitive as possible so they can work with it creatively to express themselves.
This talk will show you why prototyping is so important, how to do it, and what kinds of tools to use to make sure your users have the best possible experience using your software!


Speakers
avatar for Marek Bereza

Marek Bereza

Director, Elf audio
Marek is an interaction designer and computer scientist from London, specializing in real-time audio and visuals. His work has found its way into interactive installations, art galleries, Super Bowl adverts, broadcast tv, live music performances, mobile apps and music videos. Previously... Read More →


Wednesday November 20, 2019 12:05 - 12:30 GMT
Lower River Room Puddle Dock, London EC4V 3DB

12:30 GMT

Careers Fair
If you're looking for the next step in your career, please take the opportunity to sit down with companies who are actively recruiting.  Apple, Syng and Moodelizer want to meet you!  Grab some lunch and head to the exhibition area nearest to the stairs.    

Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Lunch
Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Per-sample C: a live-coding environment for learning audio DSP in C/C++
C++ is the language of choice of audio developers and the industry. Its power, flexibility and speed serve us well once we invest the time and effort to learn it. While many projects and technologies help to lower the barrier to entry or ease the learning curve, C++ remains very challenging for newcomers, especially for those aiming to learn audio DSP in terms of C++.

Per-Sample C is a C++ compiler environment for new students of audio programming. Based on LLVM/Clang and (optionally) the Tiny C Compiler, it compiles a student's short C/C++ program each time it changes, generating audio and visualizing the waveforms and spectra of the resulting signals. The immediacy of the system satisfies the curiosity of the impatient student who might not otherwise choose to learn C++.

Per-Sample C does not try to be a comprehensive framework or tool, like JUCE, and it does not necessarily encourage "best practices" for writing C++, but it is a starting point for students who want to jump directly into audio signal processing in C++.

We recognize Viznut's "Algorithmic symphonies from one line of code" as the single greatest inspiration for this system. Our primary goal is education, but the composition and performance of bytebeat and demoscene music is a secondary aim. More generally, we are motivated by questions like "what would GLSL/shaders for audio look like?" and by immediacy in programming.

This demo explores techniques in audio synthesis, effects, and music composition in terms of the Per-Sample C environment. These include:
  • Frequency Modulation
  • Phase Distortion
  • Biquad, One-Pole, and Comb Filters
  • Delay and Reverb
  • Waveshaping
  • Sequencing and Envelope Generation
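One-line "bytebeat" programs of the kind Viznut popularised show how little code per-sample synthesis needs: a single expression is evaluated for every sample index t and the low 8 bits become the output sample. A Python rendering of one such formula (the formula itself is arbitrary and illustrative, not from the demo):

```python
# Per-sample "bytebeat" sketch: evaluate one expression per sample index t
# (nominally at 8 kHz) and keep the low 8 bits as an unsigned audio sample.

def sample(t: int) -> int:
    return (t * ((t >> 12 | t >> 8) & 63 & t >> 4)) & 0xFF

# Render one second of audio as raw unsigned 8-bit samples:
audio = bytes(sample(t) for t in range(8000))
print(len(audio))  # → 8000
```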

See <https://youtu.be/FdgVW8AOcMs> for more information.
 

Speakers
avatar for Karl Yerkes

Karl Yerkes

Lecturer in Media Arts and Technology, University of California Santa Barbara
I'm a hacker working on interactive and distributed audiovisual systems. I teach and direct an electroacoustic ensemble focused on new interfaces for musical expression. I was an artist in residence at the SETI Institute.


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Procedural audio models for video games with Chunity
The demo will showcase several procedural audio models made in ChucK and integrated into Unity via Chunity. The aim of the demo is to introduce the topic to attendees, encouraging them to experiment with the models in the Unity scene by changing their parameters. The demo uses ChucK as it is a fast-prototyping, easy-to-understand language and, with Chunity, offers one of the easiest ways to generate sound synthesis within Unity. All the code will be available in a GitHub repository.

Speakers
avatar for Adrián Barahona-Ríos

Adrián Barahona-Ríos

PhD Student, University of York
I am a PhD student at the EPSRC-funded Centre for Doctoral Training in Intelligent Games and Game Intelligence (http://iggi.org.uk), based in York. From 2018 and in collaboration with Sony Interactive Entertainment Europe, I am researching strategies to increase the efficiency... Read More →


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Prototyping interactive software and installation art with SuperCollider and JavaScript
Software infrastructure built to share state between Node.js and SuperCollider processes, with case studies including three interactive art installations and a web-based touch-screen generative music tool. Some concepts are borrowed from the popular Redux JavaScript library and integrated into SuperCollider's pattern (Pbind) framework.
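For readers unfamiliar with Redux, its core pattern is a pure reducer function plus a store that notifies subscribers on every state change. A minimal Python sketch of the idea (the installations themselves used JavaScript and SuperCollider; the action names here are invented):

```python
# Redux-pattern sketch: state only changes via dispatch(action), and a pure
# reducer computes the next state; subscribers (e.g. a bridge forwarding
# state to SuperCollider) are notified after every dispatch.

def reducer(state, action):
    """Pure function: (old state, action) -> new state."""
    if action["type"] == "TOUCH":
        return {**state, "last_touch": action["xy"]}
    if action["type"] == "SET_TEMPO":
        return {**state, "tempo": action["bpm"]}
    return state

class Store:
    def __init__(self, reducer, state):
        self.reducer, self.state, self.listeners = reducer, state, []

    def subscribe(self, fn):
        self.listeners.append(fn)

    def dispatch(self, action):
        self.state = self.reducer(self.state, action)
        for fn in self.listeners:
            fn(self.state)

store = Store(reducer, {"tempo": 120})
store.dispatch({"type": "SET_TEMPO", "bpm": 90})
print(store.state["tempo"])  # → 90
```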

Speakers
avatar for Colin Sullivan

Colin Sullivan

Creative Technologist
I am a creative software developer with experience building music-related tools. Happy to chat software architecture esp. as it relates to systems, web technologies, and the future of real-time audio.


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Real-time interface for audio generation using CVAEs
Speakers
avatar for Sam Parke-Wolfe

Sam Parke-Wolfe

Software Developer, Sam Parke-Wolfe
Sam is an interdisciplinary software developer. Sam received a first-class Bachelor of Science in Music Computing from Goldsmiths, University of London in 2017.


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Recreating vintage divide-down synths
In 1983, Korg released the Poly-800, their reply to Roland's Juno-106. The Poly-800 was based on an arcade chip (MSM5232) and used a master clock divide-down system similar to drawbar organs and the Korg Delta before it. The synth adds together square waves to create an approximation to sawtooth waves, similar to how we can create a square wave by adding together sine waves. 

This simple technique makes a great starting point for your own DIY synth. At this demo you can explore how to build such a system, whether with analogue parts, Arduinos, or in Pure Data. Or just come and play on a recreation of the Poly-800.
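The square-wave summation described above can be sketched directly: mixing octave-spaced square waves with halving weights approximates a (descending) sawtooth. The specific weights and four-octave depth here are illustrative assumptions, not the Poly-800's exact circuit:

```python
# Divide-down sketch: sum square waves an octave apart, each at half the
# previous weight, to approximate a sawtooth (the dual of building a
# square wave from sine harmonics).

def square(phase):
    """Naive square wave; phase is measured in cycles."""
    return 1.0 if (phase % 1.0) < 0.5 else -1.0

def divide_down_saw(phase, octaves=4):
    """Approximate a sawtooth by mixing octave-spaced square waves."""
    mix = sum(square(phase * 2**k) * 0.5**k for k in range(octaves))
    norm = sum(0.5**k for k in range(octaves))
    return mix / norm

# One coarse cycle: the values ramp downward like a sawtooth.
samples = [round(divide_down_saw(i / 16), 2) for i in range(16)]
print(samples)
```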

Speakers
avatar for David Blurton

David Blurton

Freelance audio developer
I build electronics and write DSP code from my home office in Reykjavik. Talk to me about reverb design!


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Seeing sounds - Creative audio/visual plugin experiments with JUCE
In this session Halldór and Ragnar talk about their collaborative quest for unique and original sounds through experiments with visual representations of audio. They present tools and prototypes created along the way and a soon-to-be-released audio plugin made with JUCE - a product of this explorative work.

Demo Outline
We will talk about the idea of creatively representing and manipulating audio in the visual domain - touching on other people's previous experiments and our study of them; inspirations, research and challenges of implementing real-time plugins with a specific visual aesthetic - not to mention the fickle relationship between the audio and visual domains. We'll show a series of playful experiments with image algorithms and graphical tools, followed by a hands-on demo and discussion.

Speakers
avatar for Halldór Eldjárn

Halldór Eldjárn

Audio developer, Inorganic Audio
I'm an Icelandic musician and a programmer. I write music, and work with other artists on creating new ways of expression. My main interest is augmenting creativity with technology, in a way that inspires the artist and affects her work.
avatar for Ragnar Hrafnkelsson

Ragnar Hrafnkelsson

Director | Developer, Reactify
I make software, tools and interactive experiences for music creation, consumption and performance.


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Sieve - A plug-in for the intelligent browsing of drum samples using manifold learning
Sieve is an experimental audio plug-in designed to aid the process of navigating and selecting from large collections of drum samples. Kick and snare drum samples are automatically organized based on sound similarity and visualized on a 2D grid interface inspired by hardware such as the Ableton Push. Organization of drum samples is achieved using audio feature extraction and manifold learning techniques. Development of the plug-in was based on research focussed on determining the best method for computationally characterizing kick and snare drum samples, evaluated using audio classification tasks. A functioning version of Sieve will be available for demonstration alongside a poster presentation outlining the research methodologies used, as well as the implementation of the plug-in, which was carried out using the JUCE software framework.




Speakers
avatar for Jordie Shier

Jordie Shier

Student, University of Victoria
Jordie Shier is currently pursuing a master’s degree in computer science and music at the University of Victoria in Canada. He began research in this area at the end of his bachelor’s degree and has since presented work at the Workshop in Intelligent Music Production and AES NYC... Read More →


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

14:00 GMT

Panel: Developing audio solutions for accessibility
Audio companies large and small are discovering the value of building accessibility support into their tools. This expert panel will share perspectives from product marketing managers, designers and engineers, and will feature live demonstrations of some late-breaking solutions. Please join us to learn how to pursue this development effectively and use the results to enhance your brand and grow your business.

Moderator
avatar for Ed Gray

Ed Gray

Director, Partnering Programs, Avid
A 24-year veteran of Avid, Ed Gray has multiple leadership roles in the company's Partnering Programs. A visually impaired user himself, Ed has worked with a passionate team of designers and testers to make accessibility a priority for the solutions offered by Avid and its partners... Read More →

Speakers
CB

Carl Bussey

Senior Software Engineer, Native Instruments
avatar for Amy Dickens

Amy Dickens

Developer Community Manager, Pusher
Amy is the Developer Community Manager at Pusher. In her day job she helps engage developer communities in building projects and hosting events that they are truly passionate about. As a certified accessibility specialist from the International Association of Accessibility Professionals... Read More →
avatar for Jason Dasent

Jason Dasent

Owner & CEO, Studio Jay Recording
Jason Dasent is the owner and operator of Studio Jay Recording Ltd. He has over 25 years experience in all aspects of recording and music production. Jason launched Studio Jay Recording in 2000 catering to both the Advertising Sector and Artiste Production for many top Caribbean recording... Read More →
avatar for Timothy Adnitt

Timothy Adnitt

Director | Music Production, Native Instruments
Tim is Director of Music Production at Native Instruments, leading the Maschine, Komplete Kontrol and Komplete Audio product portfolios. He has twenty years of experience in the Music Technology industry, having previously held positions at Sibelius Software, Avid and inMusic. Tim is... Read More →


Wednesday November 20, 2019 14:00 - 14:50 GMT
Auditorium Puddle Dock, London EC4V 3DB

14:00 GMT

Immutable music: or, the time-travelling sequencer
What happens if we take a functional programming approach to audio development? This talk explores the possibilities, and difficulties, of using pure functions and immutable data to build sequencers, arpeggiators and groove boxes.

Along the way we'll discover weird outcomes when we treat time as a first-class citizen in our code. Timelines are stretched and inverted with ease; algorithms look into the future to make decisions in the present. An enhanced ability to compose our code means that we can experiment by combining musical concepts easily, even while our music plays back and without complicated 'reset' code. Feel like turning an arpeggiated drum pattern into a Bach-like canon with a heavy jazz swing? Shouldn't take a second.

But of course it's never all rainbows. What are the challenges of working with this approach? What are the limitations? Can it perform well enough for use in live situations? We'll look at data structures, optimisations and the things a pure function can never let us do.
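To make the approach concrete, here is a minimal Python sketch (the speaker's platform is Lisp-based; all names here are hypothetical): patterns are immutable tuples, musical operations are pure functions, and transformations compose freely with no 'reset' code:

```python
from typing import Callable, NamedTuple, Tuple

class Note(NamedTuple):
    start: float   # position in beats
    pitch: int     # MIDI note number

# A musical transformation is a pure function over an immutable tuple of notes
Transform = Callable[[Tuple[Note, ...]], Tuple[Note, ...]]

def transpose(semitones: int) -> Transform:
    return lambda notes: tuple(n._replace(pitch=n.pitch + semitones) for n in notes)

def swing(amount: float) -> Transform:
    # Push every off-beat eighth note later by `amount` beats
    def shift(n: Note) -> Note:
        offbeat = abs(n.start % 1.0 - 0.5) < 1e-9
        return n._replace(start=n.start + amount) if offbeat else n
    return lambda notes: tuple(shift(n) for n in notes)

def compose(*fs: Transform) -> Transform:
    def composed(notes):
        for f in fs:
            notes = f(notes)
        return notes
    return composed

pattern = (Note(0.0, 60), Note(0.5, 62), Note(1.0, 64), Note(1.5, 65))
jazzy = compose(swing(0.1), transpose(12))(pattern)

print(pattern[1])  # Note(start=0.5, pitch=62) - the original is untouched
print(jazzy[1])    # Note(start=0.6, pitch=74)
```

Because nothing is mutated, the original pattern and the transformed one coexist, which is what makes experimenting during playback safe.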

This talk will feature some live demonstrations from the author's sequencer platform, which is built on these functional principles.

Speakers
avatar for Tom Maisey

Tom Maisey

Developer, Independent
I'm an independent audio developer who has been bitten by the functional programming bug. Now I'm working on an interactive sequencer and music composition environment based on these principles. It's written in a dialect of Lisp - if you want to chat just ask me why parentheses are... Read More →


Wednesday November 20, 2019 14:00 - 14:50 GMT
Lower River Room Puddle Dock, London EC4V 3DB

14:00 GMT

Real-time processing on Android
In this talk, Don Turner and Atneya Nair will demonstrate how to build a real-time audio processing app on Android using Oboe - a C++ library for building high-performance audio apps.

The talk will cover best practices for obtaining low latency audio streams, advanced techniques for handling synchronous I/O, performing real-time signal processing, communicating with MIDI devices, obtaining optimal CPU bandwidth and ensuring your user experience is optimised for each user's Android device.
Much of the talk will be live-coded with plenty of strange sounds, musical distortion and cats.

Speakers
avatar for Don Turner

Don Turner

Developer, Google
Don helps developers to achieve the best possible audio experiences on Android. He spends a lot of time tinkering with synthesizers, MIDI controllers and audio workstations. He also has an unwavering love for Drum and Bass music and used to DJ in some of the worst clubs in Nottingham... Read More →
avatar for Atneya Nair

Atneya Nair

Intern, Audio Development, Google
I am a second year student at Georgia Tech studying Computer Science and Mathematics. I have particular interest in mathematical approaches to audio generation, processing and transformation. I recently completed an internship at Google focusing on Audio development on the Android... Read More →


Wednesday November 20, 2019 14:00 - 14:50 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

14:00 GMT

The business model canvas for audio entrepreneurs
Making audio products is often a labor of love. However, we may wonder how we can take things further and evolve our ideas into a business. When doing so, we must understand the customer, how our product serves them and how we expect to earn money.

This has traditionally been wrapped up in a complex, time-consuming and scary business plan, but it doesn't have to be that way. Many individuals, entrepreneurs and larger companies have adopted Alex Osterwalder's Business Model Canvas as a lightweight alternative to a business plan that can be captured on a single sheet of paper. In the spirit of innovation, this allows us to quickly sketch out, capture and communicate our ideas, and to prototype new ones without huge investments of time.

This talk is for aspiring audio developers, designers, and managers looking to learn and adopt a commonplace product management tool into their business with examples from our industry.

Speakers
avatar for Ray Chemo

Ray Chemo

Native Instruments
Ray is a product person specialising in music technology. After ten years in the industry, he’s developed a diverse toolbox through a range of roles covering everything from sound design, customer support, software development, and product management. He's currently focused on B2B... Read More →


Wednesday November 20, 2019 14:00 - 14:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:00 GMT

A modern approach to microtuning
This talk will be a discussion on the challenges facing musicians and software developers when wishing to use or support microtonal tunings.
Solutions will be proposed to give both users and developers a more cohesive experience when employing microtonal tunings. 
The talk will cover:
  • Brief audio demonstrations and an introduction to microtonal scales and composition.
  • An overview of the current fragmented landscape of microtonal software and hardware.
  • A brief history of tuning methods and file formats. 
  • As a software developer, how can I best support microtonal tunings? 
  • A proposal for a new tuning format suitable for database storage.
  • Presentation of an open source SDK for the storage and retrieval of tuning data (TBA)
  • Visions of the future. 
The aim of this talk is to inspire developers to add support for microtonal tuning in their software. 
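As a taste of what supporting microtonal tunings involves, this Python sketch (an illustration only, not the proposed tuning format, which is TBA) maps MIDI notes to frequencies through a per-octave cent table of the kind a Scala .scl scale describes:

```python
# A 12-entry cent table per octave (quarter-comma meantone, values rounded)
MEANTONE_CENTS = [0.0, 76.0, 193.2, 310.3, 386.3, 503.4,
                  579.5, 696.6, 772.6, 889.7, 1006.8, 1082.9]

def note_to_hz(note, cents=MEANTONE_CENTS, ref_note=60, ref_hz=261.63):
    # Total cents above the reference note, then cents -> frequency ratio
    octave, degree = divmod(note - ref_note, len(cents))
    total_cents = 1200.0 * octave + cents[degree]
    return ref_hz * 2.0 ** (total_cents / 1200.0)

print(round(note_to_hz(60), 2))  # the reference pitch: 261.63
print(round(note_to_hz(69), 2))  # a meantone A4 - noticeably flat of 440 Hz
```

Substituting an equal-temperament table `[i * 100.0 for i in range(12)]` recovers the familiar 12-TET frequencies, which is the sense in which standard tuning is just one row in a tunings database.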

Speakers
avatar for Adam Wilson

Adam Wilson

Software Developer, Node Audio Ltd
Software developer and music producer. Interests: microtuning, just intonation. Work: mobile development, C++, Rust, Kotlin


Wednesday November 20, 2019 15:00 - 15:25 GMT
Lower River Room Puddle Dock, London EC4V 3DB

15:00 GMT

Deploying plugins on hardware using open-source Elk Audio OS
Elk Audio OS significantly streamlines deploying great audio software across digital hardware devices, such as synthesizers, drum-machines and effect pedals.

We take care of the obstacles and the specialist knowledge required to deploy on extremely high-performance embedded Linux devices, letting developers concentrate on what they do best - creating awesome audio software and products.

In this talk we present the strategies and tools embodied in our platform, which allow existing audio plugins to be deployed very efficiently. We will show how plugins can be built for Elk using various tools and frameworks, including JUCE.

An open-source version of Elk will be released shortly after the ADC conference, together with a multi-channel expansion Hat for Raspberry Pi. We will discuss the details of the open-source license, and the implications for releasing products based on Elk.

Speakers
avatar for Gustav Andersson

Gustav Andersson

Senior Software Engineer, Elk
Will code C++ and python for fun and profit. Developer, guitar player and electronic music producer with a deep fascination with everything that makes sounds in one form or another. Currently on my mind: modern C++ methods, DSP algos, vintage digital/analog hybrid synths.
avatar for Ilias Bergström

Ilias Bergström

Senior Software Engineer, Elk
Computer Scientist, Researcher, Interaction Designer, Musician, with a love for all music but specially live performance. I've worked on developing several applications for live performance and use by experts, mainly using C++.
avatar for Stefano Zambon

Stefano Zambon

CTO, Elk
Wearing several hats in a music tech startup building Elk Audio OS. Loves all aspects of music DSP from math-intense algorithms to low-level kernel hacking for squeezing latency and performance.


Wednesday November 20, 2019 15:00 - 15:25 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

15:00 GMT

Immerse yourself and be saved!
Since stereo began, audio has been delivered to listeners as separate ready-to-use channels, each of which is routed to a loudspeaker somewhere around them. Immersive Audio [IA] supplements this familiar approach with numerous unmixed objects: sources with nominal properties and positions. These objects are able to move, and must therefore be rendered by some means to fit whatever loudspeaker array is at hand. Like almost all brand-new music technology, IA is a morass of vested interests, unfinished software, ugly workflows, and hyperbolic statements made by people who really ought to know better.

Quite a few businesses and academics are striving to improve IA in different directions. For now, your choice as an Immersive-embracing producer is between picking a technology vendor and allocating them a chunk of your fixed and variable costs for evermore, or to go it alone with conventional tools and plug-ins and risk catastrophe. (Or, at worst, risk a passable 5.1 mix.) Things are just as confusing for customers.

If you're a music producer, where (and why) do you start? Where can you go to be inspired by examples of IA being used well? How do you deliver your finished project to consumers? As an engineer attempting to build the tools and infrastructures that support next year's workflows, what can you expect customers to demand of you?

And why should you — as a music lover, trained listener, and highly-educated consumer of music technology — care about any of this stuff? Behind the hype, is IA worthwhile? In a world where mixing audio into anything more than two channels is still a niche pursuit, will it ever matter?

I am a solo inventor in this field, and needed to research these questions in order to eat. I present the best answers I can, volunteer a perspective on the state of the art, and suggest the Immersive Audio equivalent of the Millennium Prize problems. If we can solve these together, we can all win.

Speakers
avatar for Ben Supper

Ben Supper

Ben Supper, Ben Supper
Ben obtained a PhD in the field of spatial psychoacoustics in 2005. Since then he has designed hardware, software, and DSP algorithms for various companies including Cadac, Focusrite, and ROLI. Last year, Ben left a perfectly good job at ROLI to return to the field of spatial acoustics... Read More →


Wednesday November 20, 2019 15:00 - 15:25 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:00 GMT

Real-time dataflow for audio
A dataflow software architecture models computation as a directed graph, where the nodes are pure functions and the edges between nodes are data. In addition to recent uses in deep learning, big data, and reactive programming, dataflow has long been an ideal fit for digital signal processing (DSP). In a sense, deep learning's artificial neural networks can be thought of as DSP with large adaptive filters and non-linearities.
Despite the success of dataflow in machine learning (ML) and DSP, there has not yet been, to our knowledge, a lightweight dataflow library that fulfills these requirements: small (under 50 kB of code), portable with few dependencies, open source and, most importantly, offering predictable performance suitable for embedded systems with real-time processing on the order of one millisecond per graph evaluation.
We describe a real-time dataflow architecture and initial C++ implementation for audio that meet these requirements, then explore the benefits of a unified view of ML and DSP. We also compare this C++ library approach to alternatives such as ROLI SOUL, which is based on a domain-specific programming language.
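A toy Python sketch of the core idea (illustrative only - the library described is C++): nodes are pure functions of their input blocks, edges are data, and one graph evaluation is a single pass in topological order, with no locks and buffers that could be preallocated:

```python
def evaluate(graph, order, source_block):
    # graph: name -> (pure_fn, [input names]); order must be topological
    outputs = {"in": source_block}
    for name in order:
        fn, inputs = graph[name]
        outputs[name] = fn(*(outputs[i] for i in inputs))
    return outputs

gain = lambda g: (lambda x: [g * s for s in x])       # one-input node
mix = lambda a, b: [sa + sb for sa, sb in zip(a, b)]  # two-input node

graph = {
    "dry": (gain(1.0), ["in"]),
    "wet": (gain(0.25), ["in"]),
    "out": (mix, ["dry", "wet"]),
}
block = [1.0, -1.0, 0.5, -0.5]  # a toy 4-sample audio block
result = evaluate(graph, ["dry", "wet", "out"], block)["out"]
print(result)  # [1.25, -1.25, 0.625, -0.625]
```

Since every node is pure, the same scheduling machinery serves a filter chain or an ML inference graph, which is the unified view of DSP and ML the talk explores.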

Speakers
avatar for Domingo Hui

Domingo Hui

Intern, Google
I am a fourth year student studying Mathematics and Computer Science at the University of Waterloo. Interested in functional programming, low-level systems and real-time performance. I recently completed an internship implementing parts of the Android Audio framework using a dataflow... Read More →
avatar for Glenn Kasten

Glenn Kasten

software engineer, Google
Glenn Kasten is a software engineer in the Android media team, with a focus on low-­level audio and performance. His background includes real-­time operating systems and embedded applications, and he enjoys playing piano.


Wednesday November 20, 2019 15:00 - 15:25 GMT
Auditorium Puddle Dock, London EC4V 3DB

15:30 GMT

Coffee break
Wednesday November 20, 2019 15:30 - 16:00 GMT
The Mermaid

15:30 GMT

Poster: Freesound API: add 400k+ sounds to your plugin!
Freesound is a collaborative database where users share sound effects, field recordings, musical samples and other audio material under Creative Commons licenses. Freesound offers both a website to interact with the database and a RESTful API which provides programmatic browsing and retrieving of sounds and other Freesound content.

Freesound currently contains more than 415k sounds that have been downloaded 139M times by 9M registered users. We present a JUCE client library that permits easy integration of Freesound into JUCE projects. The library makes it possible, among other things, to use the advanced text- and audio-based Freesound search engine, to download and upload sounds, and to retrieve a variety of sound analysis information (i.e. audio features) for all Freesound content.
The code, together with examples and documentation, is available at github.com/MTG/freesound-juce! Come check our poster to learn more about this library!
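For a flavour of the underlying REST API, here is a hedged Python sketch that builds a text-search request against the Freesound APIv2 `/search/text/` endpoint (the token is a placeholder for your own API key; see the official API docs for the full parameter list):

```python
from urllib.parse import urlencode

API_ROOT = "https://freesound.org/apiv2"

def text_search_url(query, token, page_size=15):
    # Assemble the documented query-string parameters for a text search
    params = {"query": query, "page_size": page_size, "token": token}
    return f"{API_ROOT}/search/text/?{urlencode(params)}"

url = text_search_url("snare", "YOUR_API_KEY")
print(url)
# Fetching this URL returns JSON whose "results" list holds the matching sounds
```

The JUCE client library presented on the poster wraps requests like this one behind C++ calls.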

Speakers
avatar for António Ramires

António Ramires

PhD Researcher, MTG - Universitat Pompeu Fabra
I'm a PhD student in the Music Technology Group of Universitat Pompeu Fabra, a music maker as Caucenus and a radio host in Rádio Universidade de Coimbra. My main area of research is music and audio signal processing, together with machine learning, towards the development of interfaces... Read More →
avatar for Frederic Font

Frederic Font

Post-doctoral researcher, Music Technology Group, Universitat Pompeu Fabra
I'm a researcher and developer at the Music Technology Group (MTG), Universitat Pompeu Fabra, Barcelona. The MTG is one of the biggest music technology research groups in Europe. There I coordinate the Freesound website and all Freesound-related projects that we carry out.


Wednesday November 20, 2019 15:30 - 16:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

15:30 GMT

Poster: Virtual analog modelling with automated nodal DK-method in real-time
Note: even if you don't have any previous knowledge of virtual analog modelling, do join! You should, however, know the basics of linear algebra, circuit schematics and the solving of nonlinear systems.

The poster presents a prototyping framework for virtual analog modelling in the state-space domain. The matrices needed for computation are derived from circuits in a netlist representation in the application LTSpice.
Specifically, the following aspects are covered:
  • Basics of state-space modelling in the virtual analog domain and the challenges it poses
  • Generating the matrices needed for computation
  • Debugging and testing framework in MATLAB®
  • Real-time implementation in the JUCE framework
  • Handling nonlinear circuit components
  • Current status of the project
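As a minimal illustration of the state-space form x[n+1] = A·x[n] + B·u[n], y[n] = C·x[n] + D·u[n], here is a hand-derived one-state RC low-pass in Python (a sketch only - the poster's point is that the nodal DK-method derives such matrices automatically from a netlist):

```python
import math

def rc_lowpass_coeffs(r, c, sr):
    # Exact discretization of dx/dt = (u - x)/(RC) with u held per sample;
    # A and B are scalars because the capacitor voltage is the only state
    a = math.exp(-1.0 / (r * c * sr))
    return a, 1.0 - a  # A, B (with C = 1, D = 0)

def process(u, r=1e3, c=100e-9, sr=48000):
    a, b = rc_lowpass_coeffs(r, c, sr)
    x, y = 0.0, []
    for sample in u:
        y.append(x)            # y[n] = C*x[n] + D*u[n]
        x = a * x + b * sample  # x[n+1] = A*x[n] + B*u[n]
    return y

step = [1.0] * 64
out = process(step)
print(out[0], out[-1])  # starts at 0 and settles toward 1
```

Real circuits add nonlinear components, which turn the update into a nonlinear system solved per sample - one of the challenges the poster covers.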

Speakers
avatar for Daniel Struebig

Daniel Struebig

Student, Aalborg University Copenhagen
I'm very much interested in reverberation and occlusion/obstruction in the context of video games. Currently investigating Microsoft's Project Acoustics. Apart from that, everything related to tools that empower sound designers in the context of video games.


Wednesday November 20, 2019 15:30 - 16:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

16:00 GMT

Panel: Machine learning
Speakers
avatar for Spencer Salazar

Spencer Salazar

Chief Technology Officer, Output
avatar for Francois Pachet

Francois Pachet

Director, Spotify
François got his Ph.D. and Habilitation from Université Pierre et Marie Curie (UPMC), after an engineer’s degree from Ecole des Ponts et Chaussées. He has been assistant professor in artificial intelligence at UPMC until 1997 when he joined SONY Computer Science Laboratory P... Read More →
avatar for Vamshi Raghu

Vamshi Raghu

Senior Manager, Knowles
I currently lead the developer and integrator experience engineering teams at Knowles Intelligent Audio. In past lives, I helped create music games at Electronic Arts, game audio authoring tools at Audiokinetic, and enabled audio experience features for early stage startups.
avatar for Russell McClellan

Russell McClellan

Principal Software Engineer, iZotope
Russell McClellan is Principal Software Engineer at iZotope, and has worked on several product lines, including RX, Insight, Neutron, Ozone, and VocalSynth. In his career, he's worked on DAWs, applications, plug-ins, audio interfaces, and web applications for musicians.


Wednesday November 20, 2019 16:00 - 16:50 GMT
Auditorium Puddle Dock, London EC4V 3DB

16:00 GMT

Developing iOS music apps with 3D UI using Swift, AudioKit and SceneKit
SceneKit and AudioKit are high-level APIs that greatly simplify the development of music apps on iOS and macOS, reduce the amount of boilerplate code and shorten development time.

In this talk, an overview of these frameworks will be given. Common pitfalls and best development practices will be explained. I will introduce the MVVM pattern and give a quick introduction to RxSwift (only the essential features needed to implement the MVVM pattern). Using this architectural pattern will allow us to build an app with dual 2D and 3D UIs that the user can switch between, achieve a separation of presentation and business logic, and improve testability.

In the second part of the talk, I will showcase, step by step, the development process of a sample iOS virtual synthesizer with a 3D UI. The synthesizer will include components such as a piano keyboard, faders, rotary encoders and an LCD screen.

Speakers
avatar for Alexander Obuschenko

Alexander Obuschenko

Independent
I’m a freelance mobile software engineer. While I’m not working on my client’s jobs, I’m doing audio and graphics programming. One of my recent music projects is a sequencer application with intrinsic support for polyrhythmic and pure intonation music. I would love to talk... Read More →


Wednesday November 20, 2019 16:00 - 16:50 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

16:00 GMT

High performance audio on iOS
Utilizing multiple cores for real-time audio processing is tricky. DSP work needs to be partitioned and distributed to threads, all the while minding real-time constraints: no locks, heap allocations, or unsafe system calls. Things get even trickier on Apple’s mobile devices, which are aggressively tuned to save energy and prolong battery life. As we'll see, even getting optimal single-threaded performance can be a challenge.

First we'll look at the high-level architecture of Apple's mobile processors and the challenges involved in striking a balance between energy usage and performance in the OS. Then we'll examine the frequency scaling and core switching behavior of Apple devices with the help of measurements. Finally, we'll explore ways of mitigating the impact of these power-saving measures on real-time workloads, both for single- and multi-threaded applications.

Speakers
avatar for Ryan Brown

Ryan Brown

Technical Principal, Ableton AG
Ryan is a technical principal on the audio engine team at Ableton and is passionate about building tools to inspire musicians.


Wednesday November 20, 2019 16:00 - 16:50 GMT
Lower River Room Puddle Dock, London EC4V 3DB

16:00 GMT

Loopers and bloopers
Multi-track loop stations are everywhere, yet resources on how to build one from scratch using JUCE/C++ are scarce.

The talk explains the design and implementation of a modular loop station that can record and play back MIDI notes live, on multiple tracks, at different tempos, time signatures and quantisations.
Expect nested playheads, sample-accurate timing, thread safety, computational efficiency, and thought bloopers.

There will be code. 

Speakers
avatar for Vlad Voina

Vlad Voina

Technical Director, Vocode
Making the next generation of digital tools for sound-and-light professionals. Ex-ROLI Software Engineer. Using JUCE to build audio plugins, plugin hosts, loop stations and interactive light installations. Committed to C++.


Wednesday November 20, 2019 16:00 - 16:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB

17:00 GMT

A chat with Gerhard Behles, CEO & Co-Founder, Ableton
ADC's Executive Director, Jean-Baptiste Thiebaut, will be putting the questions to Gerhard Behles, as Ableton celebrates its 20th year.  

Speakers
avatar for Gerhard Behles

Gerhard Behles

CEO & Co-Founder, Ableton
Gerhard Behles grew up in Munich and developed an interest in electronic music as a teenager.  After a year at the Institute for Sonology in The Hague, he moved to Berlin to study computer science, where he also worked as a freelance electronic music writer, researcher, and computer... Read More →


Wednesday November 20, 2019 17:00 - 18:00 GMT
Auditorium Puddle Dock, London EC4V 3DB

18:00 GMT

Open Mic Night (preceded by pizza & drinks!)
Finishing off ADC with a bang, the Open Mic Night is always very popular.  The host this year will be SKoT McDonald, ROLI's Head of Sound R&D.  

Strictly limited to 5 minutes, anyone can put themselves forward for a slot - comedy stand-up, a musical performance, a song, a demo of a bit of kit. Anything goes. But the 5-minute restriction is very, very strict!

Open Mic will start at 7pm and will be preceded by pizza and a pay bar in the exhibition area and on the mezzanine.

Wednesday November 20, 2019 18:00 - 21:00 GMT
Auditorium Puddle Dock, London EC4V 3DB