Upper River Room
Monday, November 18
 

12:30 GMT

Lunch
Monday November 18, 2019 12:30 - 14:00 GMT
Upper River Room Puddle Dock, London EC4V 3DB

18:00 GMT

Welcome Party & Social Mixer in partnership with Apple: Quiz & Pizza

Monday November 18, 2019 18:00 - 21:30 GMT
Upper River Room Puddle Dock, London EC4V 3DB
 
Tuesday, November 19
 

11:00 GMT

11.00 - 11.25 Real-Time Applications with AVAudioEngine & 11.25 - 11.50 Audio Unit v3 Revisited

11.00 - 11.25
Real-Time Applications with AVAudioEngine
AVAudioEngine provides a powerful, feature-rich API to achieve simple as well as complex tasks and to simplify real-time audio. This talk will give an introduction to AVAudioEngine and demonstrate the various ways it can be used. It will also focus on the new real-time additions introduced in macOS Catalina and iOS 13, and show how to integrate MIDI-only plug-ins with other engine nodes. The talk will conclude with a discussion of best practices and workflows with AVAudioEngine.

11.25 - 11.50
Audio Unit v3 Revisited
Version three of the Audio Unit API was originally introduced in macOS 10.11 and iOS 9, in the form of Audio Unit Extensions. It provides your app with sophisticated audio manipulation and processing capabilities, as well as allowing you to package your instruments and effects in self-contained units that are compatible with a wide variety of host applications. During this talk, we will explore some of the aspects of working with the Audio Unit v3 API. Topics we will touch on include:
- updating existing hosts to support Audio Unit Extensions
- porting existing Audio Units to the new API
- the new user preset API
- MIDI processor Audio Units

Speakers

Béla Balázs

Software Engineer, Core Audio Team, Apple
Béla Balázs is a Software Engineer on the Core Audio team at Apple, working on a variety of system frameworks, including APIs for Audio Units, AudioQueue and AVAudioEngine. Before joining Apple, he worked at Native Instruments in Berlin, on products including Maschine, Replika XT...

Peter Vasil

Software Engineer, Apple
Peter is an audio software developer with the Core Audio team at Apple, working on various APIs, such as AVAudioEngine, Audio Units and CoreMIDI. Before joining Apple, he was a key member of the Maschine development team at Native Instruments. He also worked at Ars Electronica Futurelab...


Tuesday November 19, 2019 11:00 - 11:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB

12:00 GMT

Units of measurement in C++
When writing interfaces in C++ that are expected to operate on variables representing units of measurement, it is common to represent these variables as a numeric type, like float or int, and then describe the expected unit using variable names. For every different type of unit a library writer wants to support, a different function must be written (for example: setDelayTimeInSeconds(double timeInSeconds), setDelayTimeInMilliseconds(double timeInMilliseconds), setDelayTimeInSamples(size_t numSamples)). This is tedious, and if these functions are member functions, a change elsewhere in the class can force changes to several of them at once. Alternatively, having a single function that can only operate on one type of unit can make calls to that function verbose, especially if different call sites have different types of units that need to be converted before being passed into the function.
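As a sketch of that duplication (only the three function names come from the abstract; the surrounding Delay class and its fixed sample rate are hypothetical):

```cpp
#include <cstddef>

// One conceptual operation, three near-identical entry points: every new
// unit the library wants to support means another function to write.
class Delay
{
public:
    void setDelayTimeInSeconds (double timeInSeconds)
    {
        delayInSamples = timeInSeconds * sampleRate;
    }

    void setDelayTimeInMilliseconds (double timeInMilliseconds)
    {
        setDelayTimeInSeconds (timeInMilliseconds / 1000.0);
    }

    void setDelayTimeInSamples (std::size_t numSamples)
    {
        delayInSamples = static_cast<double> (numSamples);
    }

private:
    double sampleRate = 44100.0;  // hypothetical; would normally be set in a prepare step
    double delayInSamples = 0.0;
};
```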

In this talk, we'll look at how we can use the type system to enforce type correctness of the units we operate on. Conversion operators can be used to reduce the number of overloads needed to handle different types of input, as well as automatically handle converting between different units. We'll look at how unit types can be used with JUCE's ADSR and StateVariableFilter classes, how they can be integrated with an AudioProcessorValueTreeState, and, time permitting, how to use a type that represents a periodic value to simplify the writing of an oscillator class.
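A minimal sketch of what such unit types might look like, assuming nothing about the talk's actual implementation (requires C++17). Here Milliseconds converts implicitly to Seconds, so a small overload set covers several units:

```cpp
#include <cstddef>

// Hypothetical strong unit types: each wraps a raw value, and conversion
// operators perform the unit maths, so functions take one per *concept*
// rather than one per unit.
struct Samples
{
    double count = 0.0;
};

struct Seconds
{
    double value = 0.0;

    // Converting to Samples needs a sample rate, so expose an explicit helper.
    Samples toSamples (double sampleRate) const { return { value * sampleRate }; }
};

struct Milliseconds
{
    double value = 0.0;
    operator Seconds() const { return { value / 1000.0 }; }  // implicit ms -> s
};

class Delay
{
public:
    void setDelayTime (Seconds time) { delayInSamples = time.toSamples (sampleRate).count; }
    void setDelayTime (Samples time) { delayInSamples = time.count; }

private:
    double sampleRate = 44100.0;
    double delayInSamples = 0.0;
};

int main()
{
    Delay d;
    d.setDelayTime (Seconds { 0.5 });
    d.setDelayTime (Milliseconds { 20.0 });  // converts implicitly to Seconds
    d.setDelayTime (Samples { 4410.0 });
}
```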

Speakers

Mark Jordan-Kamholz

Developer, Sinecure Audio
Mark Jordan-Kamholz is an Audio Software Developer and Designer. Currently based in Berlin, he has spent this year programming the sounds for Robert Henke's 8032av project in 6502 assembly. For the past 3 years, he has been making plugins in C++ using JUCE with his company Sinecure...


Tuesday November 19, 2019 12:00 - 12:25 GMT
Upper River Room Puddle Dock, London EC4V 3DB

14:00 GMT

Porting the Hierarchical Music Specification Language (HMSL) to JUCE
See an experimental music system from the 1980s running on today's laptops.
HMSL is a language and interactive tool for music composition and performance. It was recently resurrected using JUCE. The author will describe features of the language and walk through many examples. The author will also discuss the problems encountered and solved during the port to JUCE.

HMSL features abstract multi-dimensional shapes, live-coding, MIDI tools, algorithmic composition utilities, score entry dialect, and a hierarchical scheduler. It also supports a cross-platform GUI toolkit and several editors.
Typical HMSL pieces might involve:
  • hyper-instruments with controls for harmonic complexity or density
  • MIDI ring networks for ensembles
  • dynamic just-intonation using a precursor to MPE
  • complex polyrhythms
  • algorithmic real-time development of a theme
  • real-time audio using a Motorola DSP 56000

HMSL is a set of object-oriented extensions to the Forth language.


Speakers

Phil Burk

Staff Software Engineer, Google Inc
Music and audio software developer. Interested in compositional tools and techniques, synthesis, and real-time performance on Android. Worked on HMSL, JForth, 3DO, PortAudio, JSyn, WebDrum, ListenUp, Sony PS3, Syntona, ME3000, Android MIDI, AAudio, Oboe and MIDI 2.0.


Tuesday November 19, 2019 14:00 - 14:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:00 GMT

Sound Control: Enabling fast and flexible musical interface creation for disability (with demo)
This is an opportunity to learn about the story behind the development of the Sound Control software, a unique collaboration between computer music researcher and musician Dr Rebecca Fiebrink and her team (at the time based at Goldsmiths, London), specialist musicians and special needs music educators from Northamptonshire, and one of the country's largest music education Hubs, The Northamptonshire Music and Performing Arts Trust.

The Sound Control software will be demonstrated as part of the session. The software has been designed primarily to meet the musical needs of children and young people with profound and multiple learning disabilities, and the evolution of the software into its present form has been profoundly influenced by these young people.

Researcher Sam Parke-Wolff, who worked with Dr Fiebrink on writing the software, will be on hand to talk about the technicalities and the machine-learning algorithms underlying the program and its unique operation.

Speakers

Simon Steptoe

Musical Inclusion Programme Manager, Northamptonshire Music and Performing Arts Trust
I currently work as Musical Inclusion Programme and Partnership Manager at Northamptonshire Music and Performing Arts Trust, which is the lead for the Music Education Hubs in Northamptonshire and Rutland. My role is to set up projects with children and young people in challenging circumstances...


Tuesday November 19, 2019 15:00 - 15:25 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:50 GMT

Offloading audio and ML compute to specialised low power SoCs
Likely audience
System designers, integrators and algorithm developers interested in building battery-powered audio and sensor hardware ranging from speaker boxes and mainstream consumer electronics, to toys, musical instruments and controllers.

Abstract
Audio, speech and gestural interfaces have become ubiquitous on mobile and wireless devices. This has driven the development of innovative low-power DSPs that offer real-time vector and matrix math specialised for audio and sensor DSP/ML processing, at a fraction of the battery power consumption.

This talk will cover different perspectives on offloading power-hungry real-time processing from the application processor to a highly energy-efficient SoC (System on Chip).

Speakers

Vamshi Raghu

Senior Manager, Knowles
I currently lead the developer and integrator experience engineering teams at Knowles Intelligent Audio. In past lives, I helped create music games at Electronic Arts and game audio authoring tools at Audiokinetic, and enabled audio experience features for early-stage startups.


Tuesday November 19, 2019 15:50 - 16:40 GMT
Upper River Room Puddle Dock, London EC4V 3DB
 
Wednesday, November 20
 

10:30 GMT

Support of MIDI2 and MIDI-CI in VST3 instruments
Abstract: The recent extensions of the MIDI standard, namely MIDI 2.0 and MIDI-CI (Capability Inquiry), create many opportunities to develop hardware and software products that surpass previous products in terms of accuracy, expressiveness and convenience. While things should become easier for users, the complexity of supporting MIDI as a developer will increase significantly. In this presentation we will give a brief overview of these new MIDI extensions, then discuss how these changes are reflected in the VST3 SDK and what plugin developers need to do to make use of these new opportunities. Fortunately, many of these new capabilities can be supported with little to no effort, thanks to the design principles and features of VST3, which will also be discussed. We may also briefly touch on questions regarding support of these new MIDI capabilities from the perspective of hosting VST3 plugins.

The presentation will start with short overviews of MIDI 2.0, MIDI-CI and VST3, then dive into each specific MIDI extension to put it into the context of the related concepts in VST3. We will start with MIDI 2.0 per-note controllers and VST3 note expression, then look into MIDI 2.0 pitch handling methods and compare them to VST3. After that, several further areas, such as
  • MIDI 2.0 - increased resolution
  • MIDI 2.0 - Channel groups
  • MIDI-CI - Program Lists
  • MIDI-CI - Recall State
will be put in context with VST3.
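To ground this in code, here is a rough sketch (assuming the VST3 SDK headers are on the include path; this is not material from the talk) of where per-note data already arrives in a VST3 plug-in's event list:

```cpp
// In VST3, MIDI 2.0-style per-note data arrives as typed events rather
// than raw MIDI bytes; note expression values are normalised doubles.
#include "pluginterfaces/base/funknown.h"
#include "pluginterfaces/vst/ivstevents.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

void handleInputEvents (IEventList* events)
{
    if (events == nullptr)
        return;

    const int32 count = events->getEventCount();
    for (int32 i = 0; i < count; ++i)
    {
        Event e {};
        if (events->getEvent (i, e) != kResultOk)
            continue;

        switch (e.type)
        {
            case Event::kNoteOnEvent:
                // e.noteOn.pitch plus e.noteOn.tuning (in cents) already
                // carry per-note pitch offsets without extra MIDI channels.
                break;

            case Event::kNoteExpressionValueEvent:
                // Per-note controllers: typeId identifies the expression
                // (e.g. tuning), value is a normalised double, and noteId
                // ties it back to the originating note-on.
                break;

            default:
                break;
        }
    }
}
```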
The presentation will be given by two senior developers from Steinberg who have many years of experience contributing to VST and supporting MIDI inside Steinberg's software products, especially Cubase and Nuendo.

Presenters:
  • Arne Scheffler has worked as a senior developer at Steinberg for 20 years in several areas. He is the main contributor to the cross-platform UI framework VSTGUI.
  • Janne Roeper has worked as a senior developer at Steinberg for more than 20 years, especially in the area of MIDI support in Cubase and Nuendo.
Both have contributed to the VST specification since VST 2 and love making music.

Speakers

Janne Roeper

Software Developer, Steinberg
My interests are making music together with other musicians in realtime, music technology, especially expressive MIDI controllers, programming, composing, yoga, meditation, piano, keyboards, drums, bass and other instruments, agile methodologies, computers and technology in general...

Arne Scheffler

Software Developer, Steinberg
I've been working at Steinberg for 20 years now and have used Cubase for 30 years. I'm the maintainer and main contributor of the open-source VSTGUI framework. If you want to know anything about VSTGUI, Cubase or Steinberg, talk to me.


Wednesday November 20, 2019 10:30 - 11:20 GMT
Upper River Room Puddle Dock, London EC4V 3DB

11:30 GMT

Live musical variations in JUCE
In this talk I will give insight into my recent work with Icelandic artists Björk and Ólafur Arnalds.
Together with them I have worked on creating plugins that are used in their live performances, manipulating both audio and MIDI. I will give a quick demonstration of how the plugins work and also describe the process of working with artists to bring their ideas to life in JUCE code.

From idea to prototype
How to take a broadly scoped idea and deliver a working prototype. I present the approaches taken in iterating on ideas with artists. When working on software in an artistic context, descriptions can often be very vague, or requirements hard to understand. But instead of just saying "No, that's not possible", you can take a step back and look at the problem differently by focusing on what the outcome should look or sound like and working your way towards it in a practical way - without compromising the originality of the idea.

Speeding up the process
Integrating freely available third-party libraries such as aubio and Faust with JUCE for fast idea validation and prototyping was essential in this project. I needed rudimentary pitch shifting, onset detection and pitch detection. Not having the resources to implement them myself before the deadline, I chose to use aubio and Faust, with great results.
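As a hedged illustration of that kind of glue code (assuming aubio's C API and JUCE's buffer classes; this is not the project's actual source), a small wrapper feeding mono blocks into an aubio onset detector might look like this:

```cpp
#include <aubio/aubio.h>
#include <juce_audio_basics/juce_audio_basics.h>

class OnsetDetector
{
public:
    explicit OnsetDetector (unsigned int sampleRate)
        : onset (new_aubio_onset ("default", bufferSize, hopSize, sampleRate)),
          input (new_fvec (hopSize)),
          output (new_fvec (1))
    {
    }

    ~OnsetDetector()
    {
        del_aubio_onset (onset);
        del_fvec (input);
        del_fvec (output);
    }

    // Feed one block of mono audio; returns true if an onset was detected.
    bool process (const juce::AudioBuffer<float>& buffer)
    {
        bool detected = false;
        const float* samples = buffer.getReadPointer (0);

        for (int i = 0; i < buffer.getNumSamples(); ++i)
        {
            fvec_set_sample (input, samples[i], filled++);

            if (filled == hopSize)   // one aubio hop accumulated
            {
                aubio_onset_do (onset, input, output);
                detected |= fvec_get_sample (output, 0) > 0.0f;
                filled = 0;
            }
        }
        return detected;
    }

private:
    static constexpr unsigned int bufferSize = 1024, hopSize = 256;
    aubio_onset_t* onset;
    fvec_t* input;
    fvec_t* output;
    unsigned int filled = 0;
};
```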

Speakers

Halldór Eldjárn

Audio developer, Inorganic Audio
I'm an Icelandic musician and a programmer. I write music, and work with other artists on creating new ways of expression. My main interest is augmenting creativity with technology, in a way that inspires the artist and affects her work.


Wednesday November 20, 2019 11:30 - 11:55 GMT
Upper River Room Puddle Dock, London EC4V 3DB

12:05 GMT

Blueprint: Rendering React.js to JUCE
Blueprint is a hybrid C++/JavaScript library and JUCE module that lets you build native JUCE apps and audio plugins using React.js. This talk will introduce and demonstrate Blueprint by first briefly introducing React.js and explaining how it works under the hood, then discussing how Blueprint can leverage those inner workings to provide a juce::Component backend to React. We'll compare with alternative approaches to introducing web technologies to the JUCE stack, such as React Native and Electron, to show how Blueprint can offer a more lightweight, flexible, and familiar way of working with JUCE while leveraging the power and speed of React.js.
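For flavour, a sketch of what the hosting side might look like. The blueprint::ReactApplicationRoot name, its use as a juce::Component, and the evaluate() call are assumptions based on the project's early examples, not a verified API reference:

```cpp
#include <juce_gui_basics/juce_gui_basics.h>
#include <blueprint/blueprint.h>   // assumed module header

class PluginEditor : public juce::Component
{
public:
    PluginEditor()
    {
        // ReactApplicationRoot is (assumed to be) itself a juce::Component
        // hosting the React render tree, so it slots into an ordinary
        // JUCE component hierarchy.
        addAndMakeVisible (appRoot);

        // Evaluate the JavaScript bundle produced by the usual React
        // toolchain; the path here is purely hypothetical.
        appRoot.evaluate (juce::File ("/path/to/main.js"));
    }

    void resized() override
    {
        appRoot.setBounds (getLocalBounds());
    }

private:
    blueprint::ReactApplicationRoot appRoot;
};
```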

Speakers

Nick Thompson

Software Developer, Syng
Nick Thompson is the founder and developer of Creative Intent, where he's released three audio plug-ins: Temper, Tantrum, and Remnant. The latter showcases Blueprint, a new user interface library he developed that brings the power and flexibility of React.js to JUCE. He recently joined...


Wednesday November 20, 2019 12:05 - 12:30 GMT
Upper River Room Puddle Dock, London EC4V 3DB

14:00 GMT

The business model canvas for audio entrepreneurs
Making audio products is often a labor of love. However, we may wonder how we can take things further and evolve our ideas into a business. When doing so, we must understand the customer, how our product serves them and how we expect to earn money.

This has traditionally been wrapped up in a complex, time-consuming and scary business plan, but it doesn't have to be that way. Many individuals, entrepreneurs, and larger companies have adopted Alex Osterwalder's Business Model Canvas as a lightweight alternative to a business plan that can be captured on a single sheet of paper. In the spirit of innovation, this allows us to quickly sketch out, capture and communicate our ideas, and prototype new ones without huge investments of time.

This talk is for aspiring audio developers, designers, and managers looking to learn about and adopt a commonplace product management tool in their business, with examples from our industry.

Speakers

Ray Chemo

Native Instruments
Ray is a product person specialising in music technology. After ten years in the industry, he's developed a diverse toolbox through a range of roles covering everything from sound design and customer support to software development and product management. He's currently focused on B2B...


Wednesday November 20, 2019 14:00 - 14:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB

15:00 GMT

Immerse yourself and be saved!
Since stereo began, audio has been delivered to listeners as separate ready-to-use channels, each of which is routed to a loudspeaker somewhere around them. Immersive Audio [IA] supplements this familiar approach with numerous unmixed objects: sources with nominal properties and positions. These objects are able to move, and must therefore be rendered by some means to fit whatever loudspeaker array is at hand. Like almost all brand-new music technology, IA is a morass of vested interests, unfinished software, ugly workflows, and hyperbolic statements made by people who really ought to know better.
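As a toy illustration of that rendering step (not any vendor's algorithm): an object carries a position instead of a channel, and a renderer maps its signal onto whatever loudspeakers happen to exist, for example by constant-power panning between the pair of speakers bracketing the source:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Speaker     { float azimuthDegrees; };
struct AudioObject { float azimuthDegrees; float sample; };

// Render one object sample onto an arbitrary layout. The layout is assumed
// sorted by azimuth; wrap-around at +/-180 degrees is ignored for brevity.
std::vector<float> renderObject (const AudioObject& obj,
                                 const std::vector<Speaker>& layout)
{
    std::vector<float> out (layout.size(), 0.0f);

    for (std::size_t i = 0; i + 1 < layout.size(); ++i)
    {
        const float a0 = layout[i].azimuthDegrees;
        const float a1 = layout[i + 1].azimuthDegrees;

        if (obj.azimuthDegrees >= a0 && obj.azimuthDegrees <= a1)
        {
            const float t = (obj.azimuthDegrees - a0) / (a1 - a0);
            const float angle = t * 1.5707963f;         // 0..pi/2
            out[i]     = std::cos (angle) * obj.sample;  // constant-power pair
            out[i + 1] = std::sin (angle) * obj.sample;
            break;
        }
    }
    return out;
}
```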

Quite a few businesses and academics are striving to improve IA in different directions. For now, your choice as an Immersive-embracing producer is between picking a technology vendor and allocating them a chunk of your fixed and variable costs for evermore, or going it alone with conventional tools and plug-ins and risking catastrophe. (Or, at worst, risking a passable 5.1 mix.) Things are just as confusing for customers.

If you're a music producer, where (and why) do you start? Where can you go to be inspired by examples of IA being used well? How do you deliver your finished project to consumers? As an engineer attempting to build the tools and infrastructures that support next year's workflows, what can you expect customers to demand of you?

And why should you — as a music lover, trained listener, and highly-educated consumer of music technology — care about any of this stuff? Behind the hype, is IA worthwhile? In a world where mixing audio into anything more than two channels is still a niche pursuit, will it ever matter?

I am a solo inventor in this field, and needed to research these questions in order to eat. I present the best answers I can, volunteer a perspective on the state of the art, and suggest the Immersive Audio equivalent of the Millennium Prize problems. If we can solve these together, we can all win.

Speakers

Ben Supper

Owner, Supperware Ltd
Ben has been an AES member since 1998. He is both an undergraduate and PhD postgraduate from the University of Surrey, has worked in R&D for Cadac and Focusrite, and ran ROLI's R&D team for a few years. Since 2018, Ben has been working as an independent consultant and inventor, designing...


Wednesday November 20, 2019 15:00 - 15:25 GMT
Upper River Room Puddle Dock, London EC4V 3DB

16:00 GMT

Loopers and bloopers
Multi-track loop stations are everywhere, yet resources on how to build one from scratch using JUCE/C++ are scarce.

The talk explains the design and implementation of a modular loop station that can record and play back MIDI notes live, on multiple tracks, at different tempos, time signatures and quantisations.
Expect nested playheads, sample-accurate timing, thread safety, computational efficiency, and thought bloopers.

There will be code. 
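In that spirit, a minimal sketch (not the talk's actual code) of one idea named above, a sample-accurate loop playhead that fires recorded MIDI events at the right offset inside each processing block:

```cpp
#include <cstdint>
#include <vector>

// Events are stamped with their offset from the loop start; on playback,
// each block scans for the events that fall inside the current window,
// handling wrap-around at the loop boundary.
struct TimedNote
{
    int64_t offsetInLoop;   // samples from loop start
    uint8_t note, velocity;
};

class LoopTrack
{
public:
    explicit LoopTrack (int64_t lengthInSamples) : length (lengthInSamples) {}

    void record (const TimedNote& n) { events.push_back (n); }

    // Emit the events due in the next numSamples, then advance the playhead.
    template <typename Callback>
    void play (int64_t numSamples, Callback&& emit)
    {
        for (const auto& e : events)
        {
            // Event offset relative to the playhead, modulo the loop length,
            // so a window crossing the loop end still fires correctly.
            const int64_t rel = (e.offsetInLoop - position + length) % length;
            if (rel < numSamples)
                emit (e, rel);   // rel = sample offset inside this block
        }
        position = (position + numSamples) % length;
    }

private:
    int64_t length;
    int64_t position = 0;
    std::vector<TimedNote> events;
};
```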

Speakers

Vlad Voina

Founder, vocode.io
Making the next generation of digital tools for sound-and-light professionals. Ex-ROLI Software Engineer. Using JUCE to build audio plugins, plugin hosts, loop stations and interactive light installations. Committed to C++.


Wednesday November 20, 2019 16:00 - 16:50 GMT
Upper River Room Puddle Dock, London EC4V 3DB
 