

Presentation
Tuesday, November 19
 

11:00 GMT

11.00 - 11.25 Real-Time Applications with AVAudioEngine & 11.25 - 11.50 Audio Unit v3 Revisited

11.00 - 11.25
Real-Time Applications with AVAudioEngine
AVAudioEngine provides a powerful, feature-rich API to achieve simple as well as complex tasks and to simplify real-time audio. This talk will give an introduction to AVAudioEngine and demonstrate the various ways it can be used. It will also focus on the new real-time additions introduced in macOS Catalina and iOS 13, and show how to integrate MIDI-only plug-ins with other engine nodes. The talk will conclude with a discussion of best practices and workflows with AVAudioEngine.

11.25 - 11.50
Audio Unit v3 Revisited
Version three of the Audio Unit API was originally introduced in macOS 10.11 and iOS 9, in the form of Audio Unit Extensions. It provides your app with sophisticated audio manipulation and processing capabilities, as well as allowing you to package your instruments and effects in self-contained units that are compatible with a wide variety of host applications. During this talk, we will explore some of the aspects of working with the Audio Unit v3 API. Topics we will touch on include:
- updating existing hosts to support Audio Unit Extensions
- porting existing Audio Units to the new API
- the new user preset API
- MIDI processor Audio Units






Speakers
Béla Balázs

Software Engineer, Core Audio Team, Apple
Béla Balázs is a Software Engineer on the Core Audio team at Apple, working on a variety of system frameworks, including APIs for Audio Units, AudioQueue and AVAudioEngine. Before joining Apple, he worked at Native Instruments in Berlin, on products including Maschine, Replika XT...

Peter Vasil

Software Engineer, Apple
Peter is an audio software developer with the Core Audio team at Apple, working on various APIs, such as AVAudioEngine, Audio Units and CoreMIDI. Before joining Apple, he was a key member of the Maschine development team at Native Instruments. He also worked at Ars Electronica Futurelab...


Tuesday November 19, 2019 11:00 - 11:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB

11:00 GMT

A practical perspective on deep learning in audio software
Deep learning technology is already all around us, from ubiquitous image classification, facial recognition, and of course, animoji. These techniques have already improved the state of the art in some areas of audio processing, including source separation and voice synthesis. How can we, as audio application developers, take advantage of these new opportunities? 

The goal of this talk is to provide a practical perspective on the design and implementation of deep learning techniques for audio software. We'll start with a very brief and high-level overview about what deep learning is, and then survey some of the application areas created by the technology. We'll keep the discussion grounded on specific case studies rather than abstractions. 

Attendees should leave with a better understanding of what it would take, practically, to integrate modern deep learning techniques into their audio software products. 

Speakers
Russell McClellan

Principal Software Engineer, iZotope
Russell McClellan is Principal Software Engineer at iZotope, and has worked on several product lines, including RX, Insight, Neutron, Ozone, and VocalSynth. In his career, he's worked on DAWs, applications, plug-ins, audio interfaces, and web applications for musicians.


Tuesday November 19, 2019 11:00 - 11:50 GMT
Lower River Room Puddle Dock, London EC4V 3DB

11:00 GMT

Creating a modular control surface ecosystem
From the Seaboard GRAND to the latest release of LUMI Keys, ROLI has always pushed to create new forms of expression and interaction through a blend of innovative hardware and powerful software, but this hasn't always been easy. This talk will go into the history of the relationship between ROLI’s hardware and software products, as well as the challenges these connections have created. From a product perspective, we’ll then go into the details of some of the most recent problems we have tried to solve and how we have redesigned the ecosystem to fit into a more open system of music making on desktop with ROLI Studio. We’ll talk about the development process and some technical details, and also provide a demo from team members on the project. Attendees should leave with a better understanding of the problems ROLI is trying to solve, a good background in our development process and maybe even an interest in helping grow this modular control surface ecosystem further!


Speakers
Elliot Greenhill

Sr Product Owner, ROLI Ltd
Elliot looks after the strategy and delivery of ROLI’s desktop software and has a passion for providing the best tools and experiences for music makers. Over the years Elliot has overseen the release of many of ROLI’s software products and the close relationship they have with...


Tuesday November 19, 2019 11:00 - 11:50 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

12:00 GMT

From jailbreaking iPhones to Grammy awarded albums: How mobile apps are shaping the future of music production
The iOS audio and music app landscape has evolved tremendously over the course of 10 years. From the days when creating apps meant “jailbreaking” iPhones and installing heavily modified compilers and development tools, to Apple’s current push to unify iOS and macOS, it is clear that the impact on audio apps will be massive.
Today we can browse thousands of music apps on the App Store, with the major names of the audio industry showing a growing interest in the mobile app landscape.
Back in 2008, BeatMaker was one of the first music apps available on iOS, launching right alongside the App Store. Over three iterations, it was shaped by a fast-moving industry that began attracting very different user profiles, from complete newcomers to renowned music producers. It took dedication and time to prove that tablets and smartphones were viable platforms for sketching musical ideas or composing full albums.
The talk will share insights on the challenges and key moments along the way, offer practical tips, and show how music apps have impacted the music production world.
We will go through the evolution of the industry following this timeline:
  • The early days (2007-2008)
  • The rise of the App Store and Music Apps (2008 - 2019)
  • The mobile music production movement today
  • Future of music production
The talk will start with a clear comparison of the industry then and now, and how it evolved. We will expose the technical challenges and how they were overcome while keeping innovation the main drive. We will then follow up on how Apple and, more broadly, the industry started recognizing iOS as a viable platform for music production. Finally, we will review the challenges apps face nowadays, especially in an overcrowded marketplace like the App Store, and how we should prepare to adapt for the future.

Speakers
Mathieu Garcia

CEO, Mathieu Garcia
Mobile Apps Expert & Entrepreneur. I started writing apps months before the App Store officially launched. At the time, it was a simple hobby, a side-project, within an uncharted territory. Today we can browse thousands of music apps on the App Store, with the major names...


Tuesday November 19, 2019 12:00 - 12:25 GMT
Auditorium Puddle Dock, London EC4V 3DB

12:00 GMT

Introducing your robot lead singer (Demo 12.30 - 14.00)
In the last two years, speech and singing synthesis technology has changed beyond recognition. Creating seamless copies of voices is now a reality, and the manipulation of voice quality using synthesis techniques can produce dynamic audio content that is impossible to differentiate from natural spoken output. More specifically, this technology allows us to create artificial singing that is better than many human singers, graft expressive techniques from one singer to another, and, using analysis-by-synthesis, categorise and evaluate singing far beyond simple pitch estimation. In this talk we approach this expanding field in a modern context, give some examples, delve into the multi-faceted nature of singing user interfaces and the obstacles still to overcome, illustrate a novel avenue of singing modification, and discuss the future trajectory of this powerful technology from Text-to-Speech to music and audio engineering platforms.

Many people now enjoy producing original music by generating vocal tracks with software such as VOCALOID, UTAU and Realivox. However, limited quality has meant that such systems rarely play a part in professional music production. Recent advances in speech synthesis technology will render this an issue of the past, by offering extremely high-quality audio output indistinguishable from the natural voice. Yet how we interface with and use such technology as a tool to support the artistic musical process is still in its infancy.
Users of these packages are presented with a classic piano-roll interface to control the voice contained within, an environment inspired by MIDI-based synthesisers dating back to the early 1980s. Other singing synthesisers accept text input, musical score or MIDI, comprise a suite of DSP routines with audio as input, and/or opt for the manipulation of a pre-recorded sample library. Despite all these options, however, currently available commercial singing synthesis generally struggles to offer the level of control over musical singing expression or style typically exploited by most real professional vocalists and composers.

The recent unveiling of CereVoice as a singing synthesiser demonstrates the ability to generate singing from modelled spoken data, producing anything from a comically crooning Donald Trump to a robot-human duet on "The Tonight Show Starring Jimmy Fallon". CereVoice's heritage as a mature Text-to-Speech technology with emotional and characterful control over its parametric speech synthesis engine offers novel insight into the ideal input that balances control and quality for its users. We exploit our unique position at the crossroads of speech technology, music information retrieval, and audio DSP to help illustrate our journey from speech to singing.
Voice technology is changing at breakneck speed. How we apply and interface with this technology in the musical domain is at a cusp. It is the ADC community that will, in the end, dictate how these new techniques are incorporated into the music technology of the future.

There will be a demo during the lunch break 12.30 - 14.00 (Tuesday 19 Nov) 

Speakers
Christopher Buchanan

Audio Development Engineer, CereProc
Chris Buchanan is Audio Development Engineer for CereProc. He graduated with Distinction in the Acoustics & Music Technology MSc degree at the University of Edinburgh in 2016, after 3 years as a signal processing geophysicist with French seismic imaging company CGG. He also holds...


Tuesday November 19, 2019 12:00 - 12:25 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

12:00 GMT

Life after the mixer - Software in live sound system applications
At previous ADCs, significant focus was given to software used in music production. But what happens to our precious audio signal once it leaves the mixing console during a live performance? Is there any cool or interesting software used in live sound system applications, which neither the artist nor the audience may be aware of? I'm here to tell you that there is! There are complex and compelling software algorithms involved in the planning and design of live entertainment events. Even during live performances, software and signal processing play a vital role in bringing the music to every seat in the audience, and a smile to every face.

Starting from the planning and simulation phase of a large sound system, I'll give you a glimpse of the software tools which are used to design some of the biggest concerts out there. Some of these tools facilitate performance prediction, alignment, rigging, and also help ensure safety concerns are met before a single loudspeaker is hung. Using sophisticated mathematical models and high-resolution dispersion data, the wavefront of each loudspeaker within a line array can be synthesized. Precise simulations of the level distribution can then be calculated and viewed in a three-dimensional representation of the venue, which can significantly reduce setup and tuning time in touring applications.
 
The choice of loudspeakers can make a dramatic difference in attaining the desired level distribution. Active and passive cardioid loudspeaker designs, combined with the correct signal processing, can provide excellent sound directivity. Analogous to their better-known microphone counterparts, cardioid loudspeakers exhibit high levels emitted to the front, and low levels to the rear. Directivity enables a sound system to project sound to where it is needed (the audience) while keeping it away from areas where it is not desired, resulting in better sound quality and intelligibility.

Moving on to the setup of the sound system at the actual venue, I will show you some of the tools available for checking the integrity of amplifiers and loudspeakers. Focusing specifically on line arrays, I will reveal the inner workings of Acoustic Neighbor Detection, a patented algorithm which helps ensure that individual loudspeakers within a line array or a sub-woofer array are positioned in the right order. This helps to avoid cabling errors, which are unfortunately quite common during time-critical setup procedures.

Finally, in addition to the common EQ, compressor and limiter options found on most modern power amplifiers (outside the scope of this talk), there is a significant amount of signal processing which can take place during the most important time of all: the live performance itself. An interesting challenge is achieving a truly uniform frequency response over large audience areas, while also compensating for air absorption effects over long distances. One of the techniques involves the use of individual sets of FIR and IIR filters for every single loudspeaker within a line array, each of which thus requires a dedicated amplifier channel. These filters shape the sound generated by the array to precisely match a user-defined level distribution and obtain the desired frequency response, achieving true "democracy for listeners".
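To make that last point concrete, here is a generic illustration (not the manufacturer's algorithm) of the structure being described: one FIR filter per loudspeaker, each driving its own dedicated amplifier channel, so the array's combined response can be shaped towards a user-defined level distribution.

```cpp
// Generic illustration only: each loudspeaker in a line array is driven through its
// own FIR filter on its own amplifier channel.
#include <cstddef>
#include <vector>

struct FirFilter
{
    std::vector<float> coefficients;   // one impulse response per cabinet
    std::vector<float> history;        // delay line for past input samples

    float process (float x)
    {
        history.insert (history.begin(), x);     // naive O(N) shift - fine for a sketch
        history.resize (coefficients.size());
        float y = 0.0f;
        for (size_t i = 0; i < coefficients.size(); ++i)
            y += coefficients[i] * history[i];
        return y;
    }
};

// 'input' is the array feed; 'outputs' holds one channel per cabinet.
void processArray (std::vector<FirFilter>& perSpeakerFilters,
                   const std::vector<float>& input,
                   std::vector<std::vector<float>>& outputs)
{
    for (size_t spk = 0; spk < perSpeakerFilters.size(); ++spk)
        for (size_t n = 0; n < input.size(); ++n)
            outputs[spk][n] = perSpeakerFilters[spk].process (input[n]);
}
```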

Speakers
Bernardo Escalona

Software R&D, d&b audiotechnik GmbH
Software R&D at d&b audiotechnik. Bass player; music collector; motorbike freak; loves tacos.


Tuesday November 19, 2019 12:00 - 12:25 GMT
Lower River Room Puddle Dock, London EC4V 3DB

12:00 GMT

Units of measurement in C++
When writing interfaces in C++ that are expected to operate on variables carrying units of measurement, it is common to represent these variables as a numeric type, like float or int, and then describe the expected unit using variable names. For every different type of unit a library writer wants to support, a different function must be written (for example: setDelayTimeInSeconds(double timeInSeconds), setDelayTimeInMilliseconds(double timeInMilliseconds), setDelayTimeInSamples(size_t numSamples)). This is tedious, and if these functions are member functions, changes to a different part of the class can result in needing to change multiple functions in some way. Alternatively, having a single function that can only operate on one type of unit can make calls to that function verbose, especially if different call sites have different types of units that need to be converted before being passed into the function.

In this talk, we'll look at how we can use the type system to enforce type correctness of the units we operate on. Conversion operators can be used to reduce the amount of overloads needed to handle different types of input, as well as automatically handle converting between different units. We'll look at how unit types can be used with JUCE's ADSR and StateVariableFilter classes, how they can be integrated with an AudioProcessorValueTreeState, and, time permitting, how to use a type that represents a periodic value to simplify the writing of an oscillator class.
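By way of illustration, here is a minimal sketch (not the presenter's code) of the kind of unit type the abstract describes: a strongly-typed duration with conversion operators, so a single setDelayTime overload accepts seconds, milliseconds or samples. All class and function names below are invented for the example.

```cpp
// Minimal sketch: strongly-typed durations with conversion operators.
#include <cstddef>
#include <iostream>

class Seconds
{
public:
    constexpr explicit Seconds (double value) : seconds (value) {}
    constexpr double count() const { return seconds; }
private:
    double seconds;
};

class Milliseconds
{
public:
    constexpr explicit Milliseconds (double value) : ms (value) {}
    // Implicit conversion lets a Milliseconds value be passed wherever Seconds is expected.
    constexpr operator Seconds() const { return Seconds (ms / 1000.0); }
private:
    double ms;
};

class SampleCount
{
public:
    constexpr SampleCount (size_t numSamples, double sampleRate)
        : samples (numSamples), rate (sampleRate) {}
    constexpr operator Seconds() const { return Seconds (static_cast<double> (samples) / rate); }
private:
    size_t samples;
    double rate;
};

// One overload instead of setDelayTimeInSeconds / ...InMilliseconds / ...InSamples.
void setDelayTime (Seconds delay)
{
    std::cout << "Delay set to " << delay.count() << " s\n";
}

int main()
{
    setDelayTime (Seconds (0.5));
    setDelayTime (Milliseconds (250.0));          // converted automatically
    setDelayTime (SampleCount (22050, 44100.0));  // 0.5 s at 44.1 kHz
}
```

Because the constructors are explicit, a bare number such as setDelayTime(0.5) no longer compiles; the caller must state the unit, and the conversion to the canonical unit happens in exactly one place.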

Speakers
Mark Jordan-Kamholz

Developer, Sinecure Audio
Mark Jordan-Kamholz is an Audio Software Developer and Designer. Currently based in Berlin, he has spent this year programming the sounds for Robert Henke's 8032av project in 6502 assembly. For the past 3 years, he has been making plugins in C++ using JUCE with his company Sinecure...


Tuesday November 19, 2019 12:00 - 12:25 GMT
Upper River Room Puddle Dock, London EC4V 3DB

14:00 GMT

Audio APIs & driver models : under & above the hood
Audio software developers today have a multitude of programming interfaces available for playing back and recording audio on Desktops, Tablets, and Smart Phones.

These software interfaces ultimately provide a way to send/receive audio to/from hardware audio interfaces. This involves software abstraction layers, operating system layers, device driver layers, and hardware abstraction layers.

This talk aims to provide an overview of these layers and some details of how they interact with each other on different operating systems with different driver models.

The goal of the talk is to help audio programmers develop a sound understanding of the various options available, and of how the various layers interact under the hood. This understanding would help developers choose the right software interface for their requirements with respect to latency, spatialization, performance, mixing, sample rate and format conversion, etc…

It would also help developers identify, diagnose, and fix problems in the audio pipeline. It would further enable them to extend pre-existing libraries and wrappers for features that are not yet exposed by those libraries and wrappers.

Given the breadth of this topic, this talk will not attempt to dive deep, but it can become a starting point for developers who want to explore further.

Speakers
Devendra Parakh

VP Software Development, Waves Inc.
Devendra Parakh is the VP, Software Development at Waves. He has been writing device drivers and applications for desktop and embedded platforms for more than twenty-five years. Most of his work in the past decade has been with Audio - both Professional and Consumer audio. In addition...


Tuesday November 19, 2019 14:00 - 14:50 GMT
Lower River Room Puddle Dock, London EC4V 3DB

14:00 GMT

Porting the Hierarchical Music Specification Language (HMSL) to JUCE
See an experimental music system from the 1980s running on today's laptops.
HMSL is a language and interactive tool for music composition and performance. It was recently resurrected using JUCE. The author will describe features of the language and walk through many examples. The author will also discuss the problems encountered and solved during the port to JUCE.

HMSL features abstract multi-dimensional shapes, live-coding, MIDI tools, algorithmic composition utilities, score entry dialect, and a hierarchical scheduler. It also supports a cross-platform GUI toolkit and several editors.
Typical HMSL pieces might involve:
  • hyper-instruments with controls for harmonic complexity or density
  • MIDI ring networks for ensembles
  • dynamic just-intonation using a precursor to MPE
  • complex polyrhythms
  • algorithmic real-time development of a theme
  • real-time audio using a Motorola DSP 56000

HMSL is a set of object oriented extensions to the Forth language.


Speakers
Phil Burk

Staff Software Engineer, Google Inc
Music and audio software developer. Interested in compositional tools and techniques, synthesis, and real-time performance on Android. Worked on HMSL, JForth, 3DO, PortAudio, JSyn, WebDrum, ListenUp, Sony PS3, Syntona, ME3000, Android MIDI, AAudio, Oboe and MIDI 2.0.


Tuesday November 19, 2019 14:00 - 14:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:00 GMT

Creator Tools: Building Kontakt instruments without Kontakt
There aren't many software instruments that are as widely used 17 years after their first version as Kontakt. In this talk, we will briefly go through its history, focusing on how a strong community of content creators has been a central component of its success. From early sample libraries, to increasingly advanced instruments, it is the content hosted in Kontakt that makes so many musicians, producers and composers return to it, year after year.  

After identifying some key milestones in its evolution, we will dive deeper into its present and future, and the role the Creator Tools play in it. And going hands-on is the best way to do this, so we won't be shy about building a little something on stage.


Speakers
Dinos Vallianatos

Product Owner, Authoring Platforms, Native Instruments


Tuesday November 19, 2019 15:00 - 15:25 GMT
Auditorium Puddle Dock, London EC4V 3DB

15:00 GMT

No tracks or graphs? Designing sound-based educational audio workstations.
The Compose with Sounds project was set up by a network of academics and teachers across the EU with the goals of increasing exposure to sound-based/electroacoustic practice in secondary schools and creating provisional tools with supporting teaching materials to further enhance usage and exposure to music technology amongst teenagers.  This talk will present two large software tools that have been designed as part of this ongoing project. A new digital audio workstation entitled Compose with Sounds (CwS), alongside a networked environment for experimental live performance, Compose with Sounds Live (CwS Live). Both of these are scheduled for free distribution in late 2019.
Unlike traditional audio workstations that are track or graph based, these tools and the interactions within are based on sound-objects.  This talk will present the trials and tribulations of developing these tools and the complex technical and UX dichotomies that emerged when utilising academics, teachers and students as active components in the development process.
Audio software tools designed to run in school classrooms (where the likelihood of access to high-powered computers is small) require careful audio and UX optimisations so that they act as stepping stones to more industrial workstations. The talk will discuss a collection of the unique audio optimisations that had to be made to enable the creation of these tools while maintaining minimal audio latency and jitter. Alongside this, it will present various approaches and concessions that had to be made to empower students to move on to more traditional track-based workstations.

The talk will be broken down into four sections: exploring the requirements of pedagogical audio tools;  designing an approachable UX for sound-based music; audio optimisations required to enable true sound-based interactions;  the dichotomies of designing sound-based tools that empower users to subsequently utilise track or graph-based workstations. 



Speakers
Stephen Pearse

Senior Lecturer, University of Portsmouth
C++ Audio Software engineer specialising in the creation of educational musical tools and environments.


Tuesday November 19, 2019 15:00 - 15:25 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

15:00 GMT

Sound Control: Enabling fast and flexible musical interface creation for disability (with demo)
This is an opportunity to learn about the story behind the development of the Sound Control software, a unique collaboration between computer music researcher and musician, Dr Rebecca Fiebrink and her team (at the time based at Goldsmiths London), specialist musicians and special needs music educators from Northamptonshire, and one of the country's largest music education Hubs, The Northamptonshire Music and Performing Arts Trust.

The Sound Control software will be demonstrated as part of the session. The software has been designed primarily to meet the musical needs of children and young people with profound and multiple learning disabilities, and the evolution of the software into its present form has been profoundly influenced by these young people.

Researcher Sam Parke-Wolff, who worked with Dr Fiebrink on writing the software, will be on hand to talk about the technicalities and the machine-learning algorithms underlying the program and its unique operation.

Speakers
Simon Steptoe

Musical Inclusion Programme Manager, Northamptonshire Music and Performing Arts Trust
I currently work as Musical Inclusion Programme and Partnership Manager at Northamptonshire Music and Performing Arts Trust, which is the lead for the Music Education Hubs in Northamptonshire and Rutland. My role is to set up projects with children and young people in challenging circumstances...


Tuesday November 19, 2019 15:00 - 15:25 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:00 GMT

The many ways to play audio on the web
Modern browsers provide a variety of ways to play and create audio. As developers we have a lot of APIs to choose from. This talk gives an overview of the available APIs, their primary use cases, how they can be combined, and whether or not they can already be used across all browsers.

For all of us who mostly know APIs like the Web Audio API as a compile target for DSP languages like FAUST or SOUL, this talk will outline how those APIs work internally. What, for example, is the advantage of an AudioWorklet compared to a ScriptProcessorNode? Why would one use an audio element with the Media Source Extensions API? How can external signals be consumed and sent out via WebRTC? Which browsers support the Web MIDI API to send and receive MIDI messages to and from connected devices? How can all that be kept in sync with the TimingObject? And the list goes on with many more APIs that exist to play, create, or control media on the web.

Of course, all of those APIs differ in maturity and vary a lot when it comes to browser support. This talk aims to give an overview of the current state of audio on the web. It also tries to show how we as a community can get involved in the process of creating and updating all those APIs.

Speakers
Christoph Guttandin

Web Developer, Media Codings
I'm a freelance web developer specialized in building multimedia web applications. I'm passionate about everything that can be used to create sound inside a browser. I've recently worked on streaming solutions and interactive music applications for clients like TV stations, streaming...


Tuesday November 19, 2019 15:00 - 15:25 GMT
Lower River Room Puddle Dock, London EC4V 3DB

15:50 GMT

Introduction to MIDI 2.0
What is MIDI 2.0, anyway? That is the mission of this session. We'll explain the current state of MIDI 2.0 specifications, and provide new detail of specifications to be completed soon. This will include brief reviews of MIDI-CI, Profile Configuration and Property Exchange. The focus will be the new MIDI 2.0 Protocol, with some details of the MIDI 2.0 packet and message designs and how MIDI-CI is used to achieve maximum interoperability with MIDI 1.0 and MIDI 2.0 devices. There will be little time for Q&A, but we'd love to talk shop with you at the MIDI table in the main hall.
The presenters are key architects of the MIDI 2.0 specifications in the MIDI Manufacturers Association.
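For orientation before the session, here is a small illustrative helper (not MMA reference code) that packs a MIDI 2.0 note-on into the 64-bit Universal MIDI Packet layout as described in the published MIDI 2.0 specifications; verify the exact field positions against the spec before relying on them.

```cpp
// Illustrative only: a MIDI 2.0 note-on as a 64-bit Universal MIDI Packet.
// Word 0: message type 0x4 (MIDI 2.0 channel voice), group, status/channel,
//         note number, attribute type. Word 1: 16-bit velocity, 16-bit attribute data.
#include <array>
#include <cstdint>
#include <cstdio>

std::array<uint32_t, 2> makeMidi2NoteOn (uint8_t group, uint8_t channel,
                                         uint8_t note, uint16_t velocity16)
{
    const uint32_t messageType = 0x4;   // MIDI 2.0 channel voice
    const uint32_t status      = 0x9;   // note on
    const uint32_t attrType    = 0x0;   // no attribute

    const uint32_t word0 = (messageType << 28)
                         | (uint32_t (group & 0x0F) << 24)
                         | (status << 20)
                         | (uint32_t (channel & 0x0F) << 16)
                         | (uint32_t (note & 0x7F) << 8)
                         | attrType;

    const uint32_t word1 = uint32_t (velocity16) << 16;   // lower 16 bits: attribute data

    return { word0, word1 };
}

int main()
{
    auto packet = makeMidi2NoteOn (0, 0, 60, 0xFFFF);      // middle C, full 16-bit velocity
    std::printf ("%08X %08X\n", (unsigned) packet[0], (unsigned) packet[1]);
}
```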

Speakers
Mike Kent

Owner, MK2 Image Ltd.
Mike is a synthesizer geek. Mike works as an independent consultant specializing in product development for audio, musical instruments, professional audio/video, and USB. Mike worked for Roland in R&D for 22 years. He has also contributed to audio or music projects for Yamaha, Apple...

Brett Porter

Lead Engineer, Audio+Music, Art+Logic
Brett holds a B.M. in Composition and M.M. in Electronic/Computer Music from the University of Miami Frost School of Music. At Art+Logic since 1997, he's worked on custom software development projects of all kinds but prefers to focus on the pro audio and MI world.

Florian Bomers

Founder, Bome Software
Will translate MIDI for food. Florian Bömers has been using MIDI since the mid-80s and started programming audio and MIDI applications already in his childhood. Now he manages his company Bome Software, which creates standard software and hardware solutions for MIDI translation...


Tuesday November 19, 2019 15:50 - 16:40 GMT
Auditorium Puddle Dock, London EC4V 3DB

15:50 GMT

Offloading audio and ML compute to specialised low power SoCs
Likely audience
System designers, integrators and algorithm developers interested in building battery-powered audio and sensor hardware ranging from speaker boxes and mainstream consumer electronics, to toys, musical instruments and controllers.

Abstract
Audio, speech and gestural interfaces have become ubiquitous on mobile and wireless devices. This has driven the development of innovative low-power DSPs that offer real-time vector and matrix math, specialised for audio and sensor DSP/ML processing, at a fraction of the battery power consumption.

This talk will cover different perspectives on offloading power-hungry real-time processing from the application processor, to a highly energy-efficient SoC (System on Chip).

Speakers
Vamshi Raghu

Senior Manager, Knowles
Currently lead the developer and integrator experience engineering teams at Knowles Intelligent Audio. In past lives, I helped create music games at Electronic Arts, game audio authoring tools at Audiokinetic, and enabled audio experience features for early stage startups.


Tuesday November 19, 2019 15:50 - 16:40 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:50 GMT

Run fast, sound great: Keys to successful voice management systems
An excessive number of sound sources (or voices) is a common problem in game audio development. This is especially true in complex, open-world games where the player can travel anywhere in the world and interact with thousands of entities. Voice count quickly becomes a problem that must be addressed, but keeping the audio aesthetic intact is a real challenge!

Frontier Developments' lead audio programmer, Will Augar, shows how the company's voice management technology has evolved and gives a detailed insight into the latest system. The team's focus on software engineering best practices, such as separation of concerns and "don't repeat yourself", gives developers a great example of how to tackle complex systems with strict performance requirements.

This talk aims to show audio programmers how they can develop performant voice management systems that do not compromise on mix quality.
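For readers outside game audio, the sketch below shows the general shape of such a system (a generic illustration, not Frontier's code): score every active voice by priority and audibility, and release the least important ones once the budget is exceeded.

```cpp
// Generic illustration: keep the N most important voices, where importance
// combines designer-set priority and current audibility.
#include <algorithm>
#include <vector>

struct Voice
{
    int   id;
    float priority;      // designer-assigned, higher = more important
    float audibility;    // e.g. post-attenuation loudness estimate, 0..1
    bool  active = true;
};

void enforceVoiceLimit (std::vector<Voice>& voices, size_t maxVoices)
{
    if (voices.size() <= maxVoices)
        return;

    // Most important voices first.
    std::sort (voices.begin(), voices.end(), [] (const Voice& a, const Voice& b)
    {
        return (a.priority * a.audibility) > (b.priority * b.audibility);
    });

    // Everything past the budget is marked for a quick fade-out and release.
    for (size_t i = maxVoices; i < voices.size(); ++i)
        voices[i].active = false;
}
```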

Speakers
Will Augar

Principal Audio Programmer, Frontier Developments plc
Will Augar is the lead audio programmer at Frontier. He has been a professional software developer for over 10 years, working in both the games and music industries. He was an audio programmer at FreeStyleGames working on the DJ Hero series and has worked on Akai MPC at inMusic. In recent...


Tuesday November 19, 2019 15:50 - 16:40 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB
 
Wednesday, November 20
 

10:30 GMT

Developing a rich, cross-platform consumer experience for ROLI LUMI with JUCE, React Native and Unity
ROLI’s LUMI app aims to provide an attractive, fun, consumer-friendly way to learn how to play music in conjunction with the LUMI Keys keyboard.

A key challenge in developing the app was to rapidly develop and iterate on a user interface with game-quality visuals and slick native navigation, while harnessing the power of JUCE for the audio engine and making the app cross-platform across iOS and Android.

In this talk, the app’s lead developer will discuss how the team combined JUCE (with C++), React Native (with JavaScript) and Unity (with C#) to achieve these goals.

The talk will cover:
  • An introduction to the LUMI app
  • A high-level overview of how each part of the tech stack works and the reasons for choosing it
  • Detail on how the three components co-exist, interface and interact with each other, both at a conceptual level and at a code level, with a particular focus on how JUCE and React Native are integrated
  • A discussion of the pros and cons of the overall approach, and learnings from the project

Speakers
Tom Duncalf

Senior Software Engineer, Independent
Software developer currently leading development of the LUMI app for ROLI, using React Native, JUCE and Unity.


Wednesday November 20, 2019 10:30 - 11:20 GMT
Lower River Room Puddle Dock, London EC4V 3DB

10:30 GMT

Real-time 101 - part I: Investigating the real-time problem space
“Real-time” is a term that gets used a lot, but what does it really mean to different industries? What happens when our “real-time” system doesn’t perform in real-time? And how can we ensure that we don’t get into this situation?
This talk aims to discuss what we mean by a real-time system, the practices that can be used to try and make sure it stays real-time, and in particular how these can be subtly or accidentally abused, increasing the risk of violating your real-time constraints.
We’ll take a detailed look at some of the considerations for real-time systems and the costs they involve, such as system calls, allocations and priority inversion. We’ll then cover the common tools in a programmer’s toolbox, such as mutexes, condition variables and atomics, how these interact with real-time threads and what costs they can incur.
This talk aims to ensure attendees of all experience levels leave with a solid understanding of the problems in the real-time domain and an overview of the tools commonly used to solve them.
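As a taste of the problem space (an illustration, not material from the talk), the sketch below shows the simplest of these tools in action: a lock-free atomic used to pass a parameter to the audio thread without locks, allocations or system calls.

```cpp
// Illustrative sketch: the UI thread publishes a gain value, and the real-time
// audio callback reads it without locking, allocating or blocking.
#include <atomic>
#include <cstddef>

class GainProcessor
{
public:
    // Called from the message/UI thread.
    void setGain (float newGain)
    {
        gain.store (newGain, std::memory_order_release);
    }

    // Called from the real-time audio thread: no locks, no system calls, no allocation.
    void process (float* buffer, size_t numSamples)
    {
        const float g = gain.load (std::memory_order_acquire);
        for (size_t i = 0; i < numSamples; ++i)
            buffer[i] *= g;
    }

private:
    std::atomic<float> gain { 1.0f };
    static_assert (std::atomic<float>::is_always_lock_free,
                   "atomic<float> falls back to a lock on this platform - exactly what we must avoid");
};
```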


Speakers
Dave Rowland

Software Developer, Tracktion
Dave Rowland is the director of software development at Audio Squadron (owning brands such as Tracktion and Prism Sound), working primarily on the digital audio workstation, Waveform and the engine it runs on. Other projects over the years have included audio plugins and iOS audio...

Fabian Renn-Giles

Software engineer, Fielding DSP Ltd.
Fabian Renn-Giles (PhD) is a freelance C++ programmer, entrepreneur and consultant in the audio software industry. Up until recently he was staff engineer at ROLI Ltd. and the lead maintainer/developer of the JUCE C++ framework (www.juce.com) - an audio framework used by thousands...


Wednesday November 20, 2019 10:30 - 11:20 GMT
Auditorium Puddle Dock, London EC4V 3DB

10:30 GMT

Support of MIDI2 and MIDI-CI in VST3 instruments
Abstract: The recent extensions of the MIDI standard, namely MIDI 2.0 and MIDI-CI (Capability Inquiry), create many opportunities to develop hardware and software products that surpass previous products in terms of accuracy, expressiveness and convenience. While things should become easier for users, the complexity of supporting MIDI as a developer will increase significantly. In this presentation we will give a brief overview of these new MIDI extensions, then discuss how these changes are reflected in the VST3 SDK and what plugin developers need to do to make use of these new opportunities. Fortunately, many of these new capabilities can be supported with little to no effort, thanks to the design principles and features of VST3, which will also be discussed. We may also briefly touch on questions regarding support of these new MIDI capabilities from the perspective of hosting VST3 plugins. The presentation will start with short overviews of MIDI 2.0, MIDI-CI and VST3, then dive into each specific MIDI extension to put it into the context of the related concepts in VST3. We will start with MIDI 2.0 per-note controllers and VST3 Note Expression, then look into MIDI 2.0 pitch handling methods and compare them to VST3. After that, several further areas such as
  • MIDI 2.0 - increased resolution
  • MIDI 2.0 - Channel groups
  • MIDI-CI - Program Lists
  • MIDI-CI - Recall State
will be put in context with VST3.
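To make the per-note controller / Note Expression pairing concrete, here is a rough sketch of a VST3 processor reading per-note tuning from its input event list. Type and member names follow the VST3 SDK headers (ivstevents.h, ivstnoteexpression.h) as best I recall them, and applyPerNoteTuning is a hypothetical helper, so treat the details as assumptions to verify against the SDK.

```cpp
// Rough sketch, to be checked against the VST3 SDK headers.
#include "pluginterfaces/base/funknown.h"
#include "pluginterfaces/vst/ivstaudioprocessor.h"
#include "pluginterfaces/vst/ivstevents.h"
#include "pluginterfaces/vst/ivstnoteexpression.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

// Hypothetical helper: map a normalised tuning value onto a voice.
void applyPerNoteTuning (int32 noteId, double normalisedTuning);

void handleNoteExpression (ProcessData& data)
{
    if (data.inputEvents == nullptr)
        return;

    const int32 numEvents = data.inputEvents->getEventCount();
    for (int32 i = 0; i < numEvents; ++i)
    {
        Event e {};
        if (data.inputEvents->getEvent (i, e) != kResultOk)
            continue;

        if (e.type == Event::kNoteExpressionValueEvent
            && e.noteExpressionValue.typeId == kTuningTypeID)
        {
            // 'value' is a normalised 0..1 parameter; its mapping to semitones/cents
            // is defined by the NoteExpressionTypeInfo the controller publishes.
            applyPerNoteTuning (e.noteExpressionValue.noteId,
                                e.noteExpressionValue.value);
        }
    }
}
```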
The presentation will be held by two senior developers from Steinberg who have many years of experience supporting and contributing to VST, and supporting MIDI inside Steinberg's software products, especially Cubase and Nuendo.

Presenters:
  • Arne Scheffler has worked as a senior developer at Steinberg for 20 years in several areas. He is the main contributor to the cross-platform UI framework VSTGUI.
  • Janne Roeper has worked as a senior developer for Steinberg for more than 20 years, especially in the area of MIDI support in Cubase and Nuendo.
Both have contributed to the VST specification since VST 2 and love making music.

Speakers
Janne Roeper

Software Developer, Steinberg
My interests are making music together with other musicians in realtime, music technology, especially expressive MIDI controllers, programming, composing, yoga, meditation, piano, keyboards, drums, bass and other instruments, agile methodologies, computers and technology in general...

Arne Scheffler

Software Developer, Steinberg
I've been working at Steinberg for 20 years now and have used Cubase for 30 years. I'm the maintainer and main contributor of the open source VSTGUI framework. If you want to know anything about VSTGUI, Cubase or Steinberg, talk to me.


Wednesday November 20, 2019 10:30 - 11:20 GMT
Upper River Room Puddle Dock, London EC4V 3DB

11:30 GMT

Building game audio plugins for the Unity engine
Unity (developed by Unity Technologies) is probably one of the most used and popular game engines of recent years. It allows developing cross-platform 2D and 3D games and multimedia applications easily, which is one of its main strengths. Since Unity version 5, its Native Audio Plugin SDK permits creating game audio plugins (DSP effects, synthesizers, etc.) to extend the range of factory effects bundled with the engine. But this is not the only way. This talk will cover some of the different approaches available for creating such plugins and how to get started, and it will also explain some details about the development of Voiceful Characters from Voctro Labs, a native plugin for Unity that allows synthesizing various types of voices using AI technology, with fully configurable parameters like timbre, speaking speed, gender or emotion.
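As a rough illustration of what such a plugin boils down to (the real entry points, typedefs and registration macros come from Unity's AudioPluginInterface.h and are deliberately not reproduced here), a native audio effect is essentially a per-block callback over interleaved float buffers:

```cpp
// Generic per-block DSP function of the kind a native game-audio plugin wraps.
// This is NOT the Unity SDK signature, only the processing shape.
void processBlock (const float* inBuffer, float* outBuffer,
                   unsigned int numFrames, int numChannels, float gain)
{
    // Interleaved buffers: frame-major, channel-minor.
    for (unsigned int frame = 0; frame < numFrames; ++frame)
        for (int ch = 0; ch < numChannels; ++ch)
            outBuffer[frame * numChannels + ch] = inBuffer[frame * numChannels + ch] * gain;
}
```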

Speakers
Jorge Garcia

Software Consultant, Independent
I am an audio software consultant with more than 10 years of experience working in games, professional audio, broadcast and music. In this time I have participated in projects for MIDAS/Behringer, Codemasters, Activision, Mercury Steam and Electronic Arts as well as various record...


Wednesday November 20, 2019 11:30 - 11:55 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

11:30 GMT

Hybridizing FAUST and SOUL
FAUST
FAUST (Functional Audio Stream) is a functional programming language for sound synthesis and audio processing, working at sample level, with a strong focus on the design of synthesizers, musical instruments and audio effects. The core component of FAUST is its compiler, which translates any FAUST digital signal processing (DSP) specification to a wide range of non-domain-specific languages such as C++, C, Java, LLVM bitcode or WebAssembly. Thanks to a wrapping system called "architectures", code generated by FAUST can easily be compiled into a wide variety of targets, ranging from audio plugins to standalone applications or smartphone and Web applications.
SOUL
The SOUL (SOUnd Language) platform is a language and an API. The language is a small, carefully crafted DSL for writing the real-time parts of an audio algorithm. The API is designed to deploy that SOUL code to heterogeneous CPUs and DSPs, both locally and remotely. The SOUL language is secure by design and can be safely executed on remote devices, inside drivers, or inside bare-metal or real-time kernels, even without a sandbox. SOUL programs are structured in a graph-like form. They can be JIT-compiled and dynamically redeployed onto target processors with different parallelisation characteristics.
Hybridizing FAUST and SOUL
Both approaches share common ideas: sample level DSP computation, fixed memory and CPU footprints, dynamic JIT compilation, CPU efficiency, multi-targets deployment (native and embedded platforms, web...).
After a possible Brexit, should each language and its developer community remain on their own territory? We do not think so: each approach has its advantages and disadvantages. Depending on their needs, some programmers prefer the imperative SOUL approach, others prefer the more declarative FAUST mathematical specification.

I will show how the two languages can be used together, and even possibly "hybridized", thanks to several tools developed this year in close collaboration with the SOUL developers: the Faust => SOUL backend, now part of the Faust compilation chain, and several tools to help combine the two languages. Several working examples will be demonstrated during this 25-minute session, as well as during the "Build a synth with SOUL" workshop.


Speakers
Stéphane Letz

Researcher, GRAME
Researcher at GRAME-CNCM in Lyon, France. Working on the Faust Audio DSP language and eco-system.


Wednesday November 20, 2019 11:30 - 11:55 GMT
Lower River Room Puddle Dock, London EC4V 3DB

11:30 GMT

Live musical variations in JUCE
In this talk I will give insight into my recent work with Icelandic artists Björk and Ólafur Arnalds.
Together with them I have worked on creating plugins that are used in their live performances, manipulating both audio and MIDI. I will give a quick demonstration of how the plugins work and also describe the process of working with artists to bring their ideas to life, in JUCE code. 

From idea to prototype
How to take a broadly scoped idea and deliver a working prototype. I present the approaches taken in iterating on ideas with artists. When working on software in an artistic context, descriptions can often be very vague, or requirements hard to understand. But instead of just saying "No, that's not possible", it is possible to take a step back and look at the problem differently, by emphasizing what the outcome should look or sound like and working your way towards it in a practical way - without compromising the originality of the idea.

Speeding up the process
Integrating freely available 3rd-party libraries such as aubio and Faust with JUCE for fast idea validation and prototyping was essential in this project. I needed rudimentary pitch shifting, onset detection and pitch detection. Not having the resources to implement them myself before the deadline, I chose to use aubio and Faust, with great results.

Speakers
Halldór Eldjárn

Audio developer, Inorganic Audio
I'm an Icelandic musician and a programmer. I write music, and work with other artists on creating new ways of expression. My main interest is augmenting creativity with technology, in a way that inspires the artist and affects her work.


Wednesday November 20, 2019 11:30 - 11:55 GMT
Upper River Room Puddle Dock, London EC4V 3DB

11:30 GMT

Real-time 101 - Part II: The real-time audio developer’s toolbox
“Real-time” is a term that gets used a lot, but what does it really mean to different industries? What happens when our “real-time” system doesn’t perform in real-time? And how can we ensure that we don’t get into this situation?
This talk is presented in two parts. This is the second part, which takes an in-depth look at the difficult problem of synchronization between real-time and non-real-time threads. It will share insights, tricks and design patterns that the author has established over years of real-time audio programming, and which ultimately led to the creation of the open-source farbot library. At the end of this talk, you will be equipped with a set of simple design rules guiding you to the correct solution for various real-time challenges and synchronization situations.
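For flavour, here is a generic illustration of the kind of structure the talk deals with (not farbot code): a single-producer/single-consumer FIFO that lets a message thread hand data to the real-time thread without locks or allocation.

```cpp
// Generic illustration: a fixed-capacity single-producer/single-consumer FIFO.
// The message thread pushes, the real-time audio thread pops; neither side locks,
// blocks or allocates after construction. Use trivially-copyable types on the
// real-time side so popping never allocates.
#include <atomic>
#include <cstddef>
#include <vector>

template <typename T>
class SpscFifo
{
public:
    explicit SpscFifo (size_t capacity) : buffer (capacity + 1) {}

    bool push (const T& item)                     // producer (non-real-time) thread only
    {
        const size_t w = writeIndex.load (std::memory_order_relaxed);
        const size_t next = increment (w);
        if (next == readIndex.load (std::memory_order_acquire))
            return false;                         // full: caller can retry later
        buffer[w] = item;
        writeIndex.store (next, std::memory_order_release);
        return true;
    }

    bool pop (T& item)                            // consumer (real-time) thread only
    {
        const size_t r = readIndex.load (std::memory_order_relaxed);
        if (r == writeIndex.load (std::memory_order_acquire))
            return false;                         // empty
        item = buffer[r];
        readIndex.store (increment (r), std::memory_order_release);
        return true;
    }

private:
    size_t increment (size_t i) const { return (i + 1) % buffer.size(); }

    std::vector<T> buffer;
    std::atomic<size_t> writeIndex { 0 }, readIndex { 0 };
};
```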


Speakers
Dave Rowland

Software Developer, Tracktion
Dave Rowland is the director of software development at Audio Squadron (owning brands such as Tracktion and Prism Sound), working primarily on the digital audio workstation, Waveform and the engine it runs on. Other projects over the years have included audio plugins and iOS audio...

Fabian Renn-Giles

Software engineer, Fielding DSP Ltd.
Fabian Renn-Giles (PhD) is a freelance C++ programmer, entrepreneur and consultant in the audio software industry. Up until recently he was staff engineer at ROLI Ltd. and the lead maintainer/developer of the JUCE C++ framework (www.juce.com) - an audio framework used by thousands...


Wednesday November 20, 2019 11:30 - 12:30 GMT
Auditorium Puddle Dock, London EC4V 3DB

12:05 GMT

"Did you hear that?" Learning to play video games from audio cues
The aim of this talk is to introduce an exciting new area of research and to seek interest from audio developers in joining the project and offering technical expertise on the subject. This work has recently been accepted for publication at the IEEE Conference on Games, and the research paper is available online.

Talk abstract:

In this talk I will describe work in progress regarding an interesting direction of game-playing AI research: learning to play video games from audio cues only. I will highlight that current state-of-the-art techniques rely either on visuals or symbolic information to interpret their environment, whereas humans benefit from the processing of many other types of sensor inputs. Sounds and music are key elements in games, which not only affect player experience, but gameplay itself in certain scenarios. Sounds within games can be used to alert the player to a nearby hazard (especially when in darkness), inform them they collected an item, or provide clues for solving certain puzzles. This additional sensory output is different from traditionally visual information and allows for many new gameplay possibilities. 

Audio design in games also raises some important challenges when it comes to inclusivity and accessibility. People who may be partially or completely blind rely exclusively on audio, as well as some minor haptic feedback, to play many video games effectively. Including audio as well as visual information within a game can make completing it much more plausible for visually impaired players. Additionally, individuals with hearing difficulties would find it hard to play games that are heavily reliant on sound. Intelligent agents can help to evaluate games for individuals with disabilities: if an agent is able to successfully play a game using only audio or visual input, then this could help validate the game for the corresponding player demographics. 

Speakers
Raluca Gaina

PhD Student, Queen Mary University of London
I am a research student interested in Artificial Intelligence for game playing. I'm looking to have conversations about game-playing AI using audio input (as opposed to, or complementing, traditional visual or symbolic input), with regards to accessibility in games.


Wednesday November 20, 2019 12:05 - 12:30 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

12:05 GMT

Blueprint: Rendering React.js to JUCE
Blueprint is a hybrid C++/JavaScript library and JUCE module that lets you build native JUCE apps and audio plugins using React.js. This talk will introduce and demonstrate Blueprint by first briefly introducing React.js and explaining how it works under the hood, then discussing how Blueprint can leverage those inner workings to provide a juce::Component backend to React. We'll compare with alternative approaches to introducing web technologies to the JUCE stack, such as React Native and Electron, to show how Blueprint can offer a more lightweight, flexible, and familiar way of working with JUCE while leveraging the power and speed of React.js.

Speakers
Nick Thompson

Software Developer, Syng
Nick Thompson is the founder and developer of Creative Intent, where he's released three audio plug-ins, Temper, Tantrum, and Remnant. The latter showcases Blueprint, a new user interface library he developed that brings the power and flexibility of React.js to JUCE. He recently joined...


Wednesday November 20, 2019 12:05 - 12:30 GMT
Upper River Room Puddle Dock, London EC4V 3DB

12:05 GMT

How to prototype audio software
Prototyping your software before you build it for real saves you a lot of development time if you do it at all, and a lot more if you do it right. 
Making prototypes is helpful in most areas of design but it’s particularly important in music software because users need your interface to be as intuitive as possible to be able to work with it creatively to express themselves.
This talk will show you why prototyping is so important, how to do it, and what kinds of tools to use to make sure your users have the best possible experience using your software!


Speakers
Marek Bereza

Director, Elf audio
Marek is an interaction designer and computer scientist from London, specializing in real-time audio and visuals. His work has found its way into interactive installations, art galleries, Super Bowl adverts, broadcast tv, live music performances, mobile apps and music videos. Previously...


Wednesday November 20, 2019 12:05 - 12:30 GMT
Lower River Room Puddle Dock, London EC4V 3DB

14:00 GMT

Immutable music: or, the time-travelling sequencer
What happens if we take a functional programming approach to audio development? This talk explores the possibilities, and difficulties, of using pure functions and immutable data to build sequencers, arpeggiators and groove boxes.

Along the way we'll discover weird outcomes when we treat time as a first-class citizen in our code. Timelines are stretched and inverted with ease; algorithms look into the future to make decisions in the present. An enhanced ability to compose our code means that we can experiment by combining musical concepts easily, even while our music plays back and without complicated 'reset' code. Feel like turning an arpeggiated drum pattern into a Bach-like canon with a heavy jazz swing? Shouldn't take a second.

But of course it's never all rainbows. What are the challenges of working with this approach? What are the limitations? Can it perform well enough for use in live situations? We'll look at data structures, optimisations and the things a pure function can never let us do.

This talk will feature some live demonstrations from the author's sequencer platform, which is built on these functional principles.
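A toy illustration of the idea (the author's platform is Lisp-based; this C++ sketch is not its code): when the pattern is an immutable value and rendering is a pure function of a time window, "looking into the future" is just evaluating the same function over a later window.

```cpp
// Toy illustration: a pattern is an immutable value, rendering is a pure function
// of a time window, so previewing the future or replaying the past needs no playhead
// state and no 'reset' code.
#include <cmath>
#include <cstdio>
#include <vector>

struct Note { double time; int pitch; };

struct Pattern
{
    std::vector<int> pitches;   // one note per step, looping
    double stepLength;          // seconds per step
};

// Pure: same pattern + same window always yields the same notes.
std::vector<Note> render (const Pattern& p, double windowStart, double windowEnd)
{
    std::vector<Note> out;
    const long firstStep = static_cast<long> (std::ceil (windowStart / p.stepLength));
    for (long step = firstStep; step * p.stepLength < windowEnd; ++step)
        out.push_back ({ step * p.stepLength,
                         p.pitches[size_t (step) % p.pitches.size()] });
    return out;
}

int main()
{
    Pattern arp { { 60, 64, 67, 72 }, 0.25 };
    for (const auto& n : render (arp, 1.0, 2.0))     // "look into the future": second one-second window
        std::printf ("t=%.2f pitch=%d\n", n.time, n.pitch);
}
```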

Speakers
Tom Maisey

Developer, Independent
I'm an independent audio developer who has been bitten by the functional programming bug. Now I'm working on an interactive sequencer and music composition environment based on these principles. It's written in a dialect of Lisp - if you want to chat just ask me why parentheses are...


Wednesday November 20, 2019 14:00 - 14:50 GMT
Lower River Room Puddle Dock, London EC4V 3DB

14:00 GMT

Real-time processing on Android
In this talk, Don Turner and Atneya Nair will demonstrate how to build a real-time audio processing app on Android using Oboe - a C++ library for building high-performance audio apps.

The talk will cover best practices for obtaining low latency audio streams, advanced techniques for handling synchronous I/O, performing real-time signal processing, communicating with MIDI devices, obtaining optimal CPU bandwidth and ensuring your user experience is optimised for each user's Android device.
Much of the talk will be live-coded with plenty of strange sounds, musical distortion and cats.
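For readers new to Oboe, a minimal sketch of opening a low-latency output stream with a rendering callback follows; names reflect the Oboe API as of around the time of the conference, so check the current Oboe documentation for exact signatures.

```cpp
// Minimal Oboe sketch (verify against current Oboe docs): open an exclusive,
// low-latency, mono float output stream and render a 440 Hz sine in the callback.
#include <oboe/Oboe.h>
#include <cmath>

class SineCallback : public oboe::AudioStreamCallback
{
public:
    oboe::DataCallbackResult onAudioReady (oboe::AudioStream* stream,
                                           void* audioData, int32_t numFrames) override
    {
        constexpr double kTwoPi = 6.283185307179586;
        auto* out = static_cast<float*> (audioData);
        const double inc = kTwoPi * 440.0 / stream->getSampleRate();
        for (int32_t i = 0; i < numFrames; ++i)
        {
            out[i] = 0.2f * float (std::sin (phase));   // mono float stream assumed
            phase += inc;
        }
        return oboe::DataCallbackResult::Continue;
    }
private:
    double phase = 0.0;
};

oboe::Result startStream (SineCallback& callback, oboe::AudioStream*& stream)
{
    oboe::AudioStreamBuilder builder;
    builder.setDirection (oboe::Direction::Output)
           ->setPerformanceMode (oboe::PerformanceMode::LowLatency)
           ->setSharingMode (oboe::SharingMode::Exclusive)
           ->setFormat (oboe::AudioFormat::Float)
           ->setChannelCount (oboe::ChannelCount::Mono)
           ->setCallback (&callback);

    oboe::Result result = builder.openStream (&stream);
    if (result == oboe::Result::OK)
        result = stream->requestStart();
    return result;
}
```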

Speakers
Don Turner

Developer, Google
Don helps developers to achieve the best possible audio experiences on Android. He spends a lot of time tinkering with synthesizers, MIDI controllers and audio workstations. He also has an unwavering love for Drum and Bass music and used to DJ in some of the worst clubs in Nottingham...

Atneya Nair

Intern, Audio Development, Google
I am a second year student at Georgia Tech studying Computer Science and Mathematics. I have particular interest in mathematical approaches to audio generation, processing and transformation. I recently completed an internship at Google focusing on Audio development on the Android...


Wednesday November 20, 2019 14:00 - 14:50 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

14:00 GMT

The business model canvas for audio entrepreneurs
Making audio products is often a labor of love. However, we may wonder how we can take things further and evolve our ideas into a business. When doing so, we must understand the customer, how our product serves them and how we expect to earn money.

This has traditionally been wrapped up in a complex, time-consuming and scary business plan, but it doesn't have to be that way. Many individuals, entrepreneurs, and larger companies have adopted Alex Osterwalder's Business Model Canvas as a lightweight alternative to a business plan that can be captured on a single sheet of paper. In the spirit of innovation, this allows us to quickly sketch out, capture and communicate our ideas, and prototype new ones without huge investments of time.

This talk is for aspiring audio developers, designers, and managers looking to learn and adopt a commonplace product management tool into their business with examples from our industry.

Speakers
Ray Chemo

Native Instruments
Ray is a product person specialising in music technology. After ten years in the industry, he’s developed a diverse toolbox through a range of roles covering everything from sound design, customer support, software development, and product management. He's currently focused on B2B...


Wednesday November 20, 2019 14:00 - 14:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:00 GMT

A modern approach to microtuning
This talk will be a discussion on the challenges facing musicians and software developers when wishing to use or support microtonal tunings.
Solutions will be proposed to give both users and developers a more cohesive experience when employing microtonal tunings. 
The talk will cover:
  • Brief audio demonstrations and an introduction to microtonal scales and composition.
  • An overview of the current fragmented landscape of microtonal software and hardware.
  • A brief history of tuning methods and file formats. 
  • As a software developer, how can I best support microtonal tunings? 
  • A proposal for a new tuning format suitable for database storage.
  • Presentation of an open source SDK for the storage and retrieval of tuning data (TBA)
  • Visions of the future. 
The aim of this talk is that developers will be inspired to add support for microtonal tuning in their software. 
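For readers new to the topic, the arithmetic underneath most tuning descriptions is small: a scale degree is given as a ratio or as a cents offset from a reference pitch, and frequency = reference × 2^(cents / 1200). A minimal sketch:

```cpp
// Minimal sketch of the arithmetic behind most tuning descriptions.
#include <cmath>
#include <cstdio>
#include <vector>

double centsToFrequency (double referenceHz, double cents)
{
    return referenceHz * std::pow (2.0, cents / 1200.0);
}

int main()
{
    // Example: a just-intonation major triad above A = 440 Hz, expressed in cents
    // (unison, major third 5/4 ≈ 386.31 cents, perfect fifth 3/2 ≈ 701.96 cents).
    const std::vector<double> cents { 0.0, 386.3137, 701.955 };
    for (double c : cents)
        std::printf ("%8.3f cents -> %10.4f Hz\n", c, centsToFrequency (440.0, c));
}
```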

Speakers
Adam Wilson

Software Developer, Node Audio Ltd
Software developer and music producer. Interests: microtuning, just intonation. Work: mobile development, C++, Rust, Kotlin.


Wednesday November 20, 2019 15:00 - 15:25 GMT
Lower River Room Puddle Dock, London EC4V 3DB

15:00 GMT

Deploying plugins on hardware using open-source Elk Audio OS
Elk Audio OS significantly streamlines deploying great audio software across digital hardware devices, such as synthesizers, drum-machines and effect pedals.

We take care of the obstacles and specialist knowledge for deploying on extremely high-performance embedded Linux devices, letting developers concentrate on what they do best - creating awesome audio software and products.

In this talk we present the strategies and tools embodied in our platform, with which existing audio plugins can be deployed very efficiently. We will show how plugins can be built for Elk using various tools and frameworks, including JUCE.

An open-source version of Elk will be released shortly after the ADC conference, together with a multi-channel expansion Hat for Raspberry Pi. We will discuss the details of the open-source license, and the implications for releasing products based on Elk.

Speakers

Gustav Andersson

Senior Software Engineer, Elk
Will code C++ and python for fun and profit. Developer, guitar player and electronic music producer with a deep fascination with everything that makes sounds in one form or another. Currently on my mind: modern C++ methods, DSP algos, vintage digital/analog hybrid synths.

Ilias Bergström

Senior Software Engineer, Elk
Computer Scientist, Researcher, Interaction Designer, Musician, with a love for all music but especially live performance. I've worked on developing several applications for live performance and use by experts, mainly using C++.

Stefano Zambon

CTO, Elk
Wearing several hats in a music tech startup building Elk Audio OS. Loves all aspects of music DSP from math-intense algorithms to low-level kernel hacking for squeezing latency and performance.


Wednesday November 20, 2019 15:00 - 15:25 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

15:00 GMT

Immerse yourself and be saved!
Since stereo began, audio has been delivered to listeners as separate ready-to-use channels, each of which is routed to a loudspeaker somewhere around them. Immersive Audio [IA] supplements this familiar approach with numerous unmixed objects: sources with nominal properties and positions. These objects are able to move, and must therefore be rendered by some means to fit whatever loudspeaker array is at hand. Like almost all brand-new music technology, IA is a morass of vested interests, unfinished software, ugly workflows, and hyperbolic statements made by people who really ought to know better.
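
As a deliberately over-simplified illustration of what "rendering an object" means, the sketch below pans a single mono object with an azimuth onto a plain stereo pair using a constant-power law. Real immersive renderers (VBAP, ambisonic decoders, proprietary systems) are far more sophisticated; the names and values here are invented for illustration.

```cpp
// One "object" (a source with a nominal position) rendered to loudspeaker gains.
#include <cmath>
#include <cstdio>

struct AudioObject
{
    double azimuthDegrees; // 0 = centre, -90 = hard left, +90 = hard right
    double level;          // linear gain
};

// Constant-power pan of an object onto a stereo pair.
void renderToStereo (const AudioObject& object, double& leftGain, double& rightGain)
{
    const double pi = 3.14159265358979323846;
    const double position = (object.azimuthDegrees + 90.0) / 180.0; // map to 0..1
    const double angle = position * pi / 2.0;

    leftGain  = object.level * std::cos (angle);
    rightGain = object.level * std::sin (angle);
}

int main()
{
    AudioObject vocal { -30.0, 1.0 }; // slightly left of centre
    double l = 0.0, r = 0.0;
    renderToStereo (vocal, l, r);
    std::printf ("left %.3f, right %.3f\n", l, r);
}
```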

Quite a few businesses and academics are striving to improve IA in different directions. For now, your choice as an Immersive-embracing producer is either to pick a technology vendor and allocate them a chunk of your fixed and variable costs for evermore, or to go it alone with conventional tools and plug-ins and risk catastrophe. (Or, at worst, risk a passable 5.1 mix.) Things are just as confusing for customers.

If you're a music producer, where (and why) do you start? Where can you go to be inspired by examples of IA being used well? How do you deliver your finished project to consumers? As an engineer attempting to build the tools and infrastructures that support next year's workflows, what can you expect customers to demand of you?

And why should you — as a music lover, trained listener, and highly-educated consumer of music technology — care about any of this stuff? Behind the hype, is IA worthwhile? In a world where mixing audio into anything more than two channels is still a niche pursuit, will it ever matter?

I am a solo inventor in this field, and needed to research these questions in order to eat. I present the best answers I can, volunteer a perspective on the state of the art, and suggest the Immersive Audio equivalent of the Millennium Prize problems. If we can solve these together, we can all win.

Speakers

Ben Supper

Ben Supper, Ben Supper
Ben obtained a PhD in the field of spatial psychoacoustics in 2005. Since then he has designed hardware, software, and DSP algorithms for various companies including Cadac, Focusrite, and ROLI. Last year, Ben left a perfectly good job at ROLI to return to the field of spatial acoustics... Read More →


Wednesday November 20, 2019 15:00 - 15:25 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:00 GMT

Real-time dataflow for audio
A dataflow software architecture models computation as a directed graph, where the nodes are pure functions and the edges between nodes are data. In addition to recent uses in deep learning, big data, and reactive programming, dataflow has long been an ideal fit for Digital Signal Processing (DSP). In a sense, deep learning's artificial neural networks can be thought of as DSP with large adaptive filters and non-linearities.
Despite the success of dataflow in machine learning (ML) and DSP, there has not yet been, to our knowledge, a lightweight dataflow library that fulfills these requirements: small (under 50 KB of code), portable with few dependencies, open source, and, most importantly, offering predictable performance suitable for embedded systems with real-time processing on the order of one millisecond per graph evaluation.
We describe a real-time dataflow architecture and initial C++ implementation for audio that meet these requirements, then explore the benefits of a unified view of ML and DSP. We also compare this C++ library approach to alternatives such as ROLI SOUL, which is based on a domain-specific programming language.
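
The sketch below illustrates the general idea rather than the library described in the talk: a small block-based dataflow graph whose nodes are stored in topological order and whose edge buffers are pre-allocated, so that the audio-thread walk performs no locking and, provided capacities are reserved up-front, no heap allocation. All type and function names are invented for illustration.

```cpp
// A tiny block-based dataflow graph: nodes process fixed-size blocks, edges are buffers.
#include <array>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <vector>

constexpr int kBlockSize = 64;
using Block = std::array<float, kBlockSize>;

struct Node
{
    std::vector<int> inputEdges;  // indices into the edge buffer pool
    int outputEdge;               // index of the edge this node writes
    std::function<void (const std::vector<const Block*>&, Block&)> process;
};

struct Graph
{
    std::vector<Block> edges;                 // one pre-allocated buffer per edge
    std::vector<Node> nodes;                  // stored in topological order
    std::vector<const Block*> scratchInputs;  // reused scratch, reserved at setup

    void prepare() { scratchInputs.reserve (16); }

    // Called from the audio thread: walks the nodes in order without allocating.
    void evaluate()
    {
        for (auto& node : nodes)
        {
            scratchInputs.clear();
            for (int e : node.inputEdges)
                scratchInputs.push_back (&edges[static_cast<std::size_t> (e)]);
            node.process (scratchInputs, edges[static_cast<std::size_t> (node.outputEdge)]);
        }
    }
};

int main()
{
    Graph g;
    g.edges.resize (2);
    g.prepare();

    // Node 0: a source writing a constant value into edge 0.
    g.nodes.push_back ({ {}, 0, [] (const auto&, Block& out) { out.fill (0.25f); } });

    // Node 1: a gain reading edge 0 and writing edge 1.
    g.nodes.push_back ({ { 0 }, 1, [] (const auto& in, Block& out)
    {
        for (std::size_t i = 0; i < kBlockSize; ++i)
            out[i] = (*in[0])[i] * 0.5f;
    } });

    g.evaluate();
    std::printf ("first sample of output edge: %f\n", g.edges[1][0]);
}
```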

Speakers

Domingo Hui

Intern, Google
I am a fourth year student studying Mathematics and Computer Science at the University of Waterloo. Interested in functional programming, low-level systems and real-time performance. I recently completed an internship implementing parts of the Android Audio framework using a dataflow... Read More →

Glenn Kasten

Software Engineer, Google
Glenn Kasten is a software engineer in the Android media team, with a focus on low-level audio and performance. His background includes real-time operating systems and embedded applications, and he enjoys playing piano.


Wednesday November 20, 2019 15:00 - 15:25 GMT
Auditorium Puddle Dock, London EC4V 3DB

16:00 GMT

Developing iOS music apps with 3D UI using Swift, AudioKit and SceneKit
SceneKit and AudioKit are high-level APIs that greatly simplify the development of music apps on iOS and macOS, reduce the amount of boilerplate code and shorten development time.

In this talk, an overview of these frameworks will be given. Common pitfalls and best development practices will be explained. I will introduce the MVVM pattern and give a quick introduction to RxSwift (only the essential features needed to implement the MVVM pattern). Using this architectural pattern will allow us to build an app with a dual 2D and 3D UI that the user can switch between, achieve a separation of presentation and business logic, and improve testability.

In the second part of the talk, I will showcase, step by step, the development process of a sample iOS virtual synthesizer with a 3D UI. The synthesizer will include components such as a piano keyboard, faders, rotary encoders and an LCD screen.

Speakers

Alexander Obuschenko

Independent
I’m a freelance mobile software engineer. While I’m not working on my client’s jobs, I’m doing audio and graphics programming. One of my recent music projects is a sequencer application with intrinsic support for polyrhythmic and pure intonation music. I would love to talk... Read More →


Wednesday November 20, 2019 16:00 - 16:50 GMT
Queenhithe Room Puddle Dock, London EC4V 3DB

16:00 GMT

High performance audio on iOS
Utilizing multiple cores for real-time audio processing is tricky. DSP work needs to be partitioned and distributed to threads, all the while minding real-time constraints: no locks, heap allocations, or unsafe system calls. Things get even trickier on Apple’s mobile devices, which are aggressively tuned to save energy and prolong battery life. As we'll see, even getting optimal single-threaded performance can be a challenge.

First we'll look at the high-level architecture of Apple's mobile processors and the challenges involved in striking a balance between energy usage and performance in the OS. Then we'll examine the frequency scaling and core switching behavior of Apple devices with the help of measurements. Finally, we'll explore ways of mitigating the impact of these power-saving measures on real-time workloads, both for single- and multi-threaded applications.
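
As a concrete example of the "no locks, no heap allocations" constraint mentioned above, the sketch below passes parameter changes to a real-time audio thread through a single-producer, single-consumer ring buffer built on std::atomic. It is illustrative only and not taken from the talk; the type names are invented.

```cpp
// Lock-free, allocation-free handoff of parameter changes to the audio thread.
#include <array>
#include <atomic>
#include <cstddef>
#include <cstdio>

struct ParameterChange
{
    int   parameterId;
    float value;
};

template <std::size_t Capacity>
class SpscQueue
{
public:
    // Called from the UI / message thread (single producer).
    bool push (const ParameterChange& change)
    {
        const auto w = writeIndex.load (std::memory_order_relaxed);
        const auto next = (w + 1) % Capacity;
        if (next == readIndex.load (std::memory_order_acquire))
            return false;                          // queue full: drop or retry later
        slots[w] = change;
        writeIndex.store (next, std::memory_order_release);
        return true;
    }

    // Called from the audio thread (single consumer): never blocks.
    bool pop (ParameterChange& out)
    {
        const auto r = readIndex.load (std::memory_order_relaxed);
        if (r == writeIndex.load (std::memory_order_acquire))
            return false;                          // nothing pending
        out = slots[r];
        readIndex.store ((r + 1) % Capacity, std::memory_order_release);
        return true;
    }

private:
    std::array<ParameterChange, Capacity> slots {};
    std::atomic<std::size_t> writeIndex { 0 };
    std::atomic<std::size_t> readIndex  { 0 };
};

int main()
{
    SpscQueue<128> queue;
    queue.push ({ 1, 0.5f });                      // UI thread side

    ParameterChange change {};                     // audio callback side
    while (queue.pop (change))
        std::printf ("parameter %d -> %f\n", change.parameterId, change.value);
}
```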

Speakers

Ryan Brown

Software Engineer, Ableton AG
Ryan Brown has been a software engineer at Ableton for over 8 years, working primarily on audio engine topics. He is passionate about building tools that inspire musicians.


Wednesday November 20, 2019 16:00 - 16:50 GMT
Lower River Room Puddle Dock, London EC4V 3DB

16:00 GMT

Loopers and bloopers
Multi-track loop stations are everywhere, yet resources on how to build one from scratch using JUCE/C++ are scarce.

The talk explains the design and implementation of a modular loop station that can record and play back MIDI notes live, on multiple tracks, at different tempos, time signatures and quantisations.
Expect nested playheads, sample-accurate timing, thread safety, computational efficiency, and thought bloopers.

There will be code. 
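
In the meantime, here is a minimal sketch (plain C++, not the speaker's code) of the core idea behind a MIDI looper track: events are recorded with sample-accurate offsets from the loop start and replayed each time the playhead wraps around the loop length. Thread safety, blocks straddling the loop boundary and tempo changes are deliberately left out, and all names are invented for illustration.

```cpp
// One looper track: record timestamped MIDI events, replay them every loop pass.
#include <cstdio>
#include <vector>

struct MidiEvent
{
    long samplePosition;               // offset from the start of the loop
    unsigned char status, data1, data2;
};

class LoopTrack
{
public:
    explicit LoopTrack (long loopLengthSamples) : loopLength (loopLengthSamples) {}

    void record (const MidiEvent& e) { events.push_back (e); }

    // Emit every recorded event that falls inside [playhead, playhead + blockSize),
    // wrapping at the loop boundary. Offsets passed to the callback are block-relative.
    template <typename Emit>
    void play (long blockSize, Emit&& emit)
    {
        for (const auto& e : events)
        {
            const long offset = (e.samplePosition - playhead + loopLength) % loopLength;
            if (offset < blockSize)
                emit (e, offset);
        }
        playhead = (playhead + blockSize) % loopLength;
    }

private:
    std::vector<MidiEvent> events;
    long loopLength = 0;
    long playhead = 0;
};

int main()
{
    LoopTrack track (44100);                  // a one-second loop at 44.1 kHz
    track.record ({ 0,     0x90, 60, 100 });  // note-on at the loop start
    track.record ({ 22050, 0x80, 60, 0   });  // note-off half way through

    for (int block = 0; block < 4; ++block)   // a few 512-sample callbacks
        track.play (512, [] (const MidiEvent& e, long offset)
        {
            std::printf ("status 0x%02X at offset %ld\n", (unsigned) e.status, offset);
        });
}
```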

Speakers

Vlad Voina

Technical Director, Vocode
Making the next generation of digital tools for sound-and-light professionals. Ex-ROLI Software Engineer. Using JUCE to build audio plugins, plugin hosts, loop stations and interactive light installations. Committed to C++.


Wednesday November 20, 2019 16:00 - 16:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB