
Demo
Tuesday, November 19
 

12:30 GMT

Demo: BandLoop - from idea to reality
TRIBE Instruments

TRIBE Loopz (aka BandLoop) is a tool that adapts the live-looping technique to the workflow of bands. It allows each member of the band to overlay, in real time, loops of different (or the same) instruments. By adding loops on top of one another, a band with a limited number of players can create a piece of music composed of an unlimited number of instruments.
Loopz consists of software implemented with JUCE and a set of pedals. The pedals send wireless messages to the software, which collects the audio inputs from an external audio interface and live-loops the audio material. Wireless technology was chosen so the system can adapt to the spatial demands of a band whose members are distributed across the stage.
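As a rough illustration of the core mechanism, here is a minimal overdub loop in C++ (a sketch under our own assumptions, such as a fixed loop length, not the Loopz implementation):

```cpp
// Sketch of the overdub step at the heart of a band looper, assuming a
// fixed, pre-agreed loop length. Each pass through the buffer mixes the
// incoming instrument on top of the layers recorded by earlier players.
#include <cstddef>
#include <vector>

class OverdubLoop {
public:
    explicit OverdubLoop(std::size_t lengthSamples)
        : buffer(lengthSamples, 0.0f) {}

    // Call once per audio sample. Returns the loop playback; while
    // overdubbing, the live input is stacked onto the existing material.
    float process(float input, bool overdub) {
        float out = buffer[pos];
        if (overdub)
            buffer[pos] += input;            // new layer on top of old ones
        pos = (pos + 1) % buffer.size();
        return out;
    }

private:
    std::vector<float> buffer;
    std::size_t pos = 0;
};
```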

Speakers

Giovanni Cassanelli

Student, TRIBE-Instruments
I am a junior software developer who graduated from Goldsmiths, University of London, specialising in audio programming. I am currently working as a data analyst for a growing record label (cell-recordings.net). At the same time, I am trying to launch my personal project: TRIBE Loopz (ht…


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Bytebeat - fractal self-similarity in algorithmic music
Bytebeat is a music synthesis technique in which an audio stream, typically of 8-bit PCM samples, is generated by a function of the sample index. The function corresponds to a program that employs bitwise operators (AND, OR, XOR, NOT) and basic arithmetic operators (add, multiply, subtract, divide, modulo, and bit shifts). The method was discovered by Ville-Matias Heikkilä in 2011. Bytebeat is also sometimes regarded as a genre of music in its own right.
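As a taste of the technique, here is a minimal bytebeat player in C++ using one classic community formula (a generic sketch, not the presenter's code); each output byte is a pure function of the sample index t:

```cpp
// Minimal bytebeat player: each output byte is a pure function of the
// sample index t. Pipe the raw bytes to an 8-bit player, e.g.:
//   ./bytebeat | aplay -r 8000 -f U8
#include <cstdio>

int main() {
    for (unsigned long t = 0;; ++t)
        std::putchar(
            static_cast<unsigned char>((t * (t >> 5 | t >> 8)) >> (t >> 16)));
}
```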

In its purest form, bytebeat doesn't use any scores, instruments, oscillators, or samples, yet the generated songs exhibit melody and rhythm that are often complex and polyrhythmic. It can seem mysterious how one-line C programs produce such results. The talk will show how the musical and self-similarity properties of the generated waveforms follow from the mathematical properties of bitwise and arithmetic operations.

Further development of the bytebeat technique, using it as a control source for synthesizer parameters such as pitch, amplitude, and modulation, will be presented. It will also be shown that simple formulas can be used to generate sequences of MIDI notes to feed any generic synthesizer or sampler.
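For illustration, here is a sketch of that MIDI idea (our own toy formula and scale quantization, not the presenter's method): evaluate a bytebeat-style expression at a slow step rate and quantize it to note numbers:

```cpp
// Toy illustration: a bytebeat-style formula evaluated per step,
// quantized to a minor-pentatonic scale to yield MIDI note numbers.
#include <cstdint>
#include <cstdio>

std::uint8_t noteForStep(std::uint32_t n) {
    std::uint32_t v = (n * (n >> 3 | n >> 5)) & 0x1F;      // 0..31
    static const std::uint8_t minorPenta[] = {0, 3, 5, 7, 10};
    return 48 + 12 * (v / 10) + minorPenta[(v / 2) % 5];   // from C3
}

int main() {
    for (std::uint32_t n = 0; n < 16; ++n)                 // one 16-step bar
        std::printf("step %2u -> MIDI note %u\n", n, noteForStep(n));
}
```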

Speakers

Alexander Obuschenko

Independent
I’m a freelance mobile software engineer. When I’m not working on client projects, I do audio and graphics programming. One of my recent music projects is a sequencer application with intrinsic support for polyrhythmic and pure-intonation music. I would love to talk…


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Interactive gestural software interaction developed in GranuRise
The demo will be based on the interactive gestural concepts used in the GranuRise project, which was developed in Max MSP. You can find out more about the project at http://granurise.com.

The demo will cover these gestural concepts together with related topics, such as:
- how to conceive and develop a virtual instrument so that it acts less like software and more like a musical instrument
- how to interact in a more natural, non-grid-based way using gestures, in contrast to the grid-based systems in which almost all electronic music is made (an approach also presented in the GranuRise project)
- how to use gestures to build a unique sound design without complex modulation matrices and LFO schematics (see the sketch after this list)
- GranuRise's MPE integration, which takes the concept of expressive gestural control even further
- the Roli Blocks implementation, which offers seamless integration between hardware and software and shows how to think about software in a more modular way
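As a rough sketch of the third point, one simple way to get gesture-driven modulation without an LFO matrix is to record a control gesture and loop it back as a modulation source (our illustration, not GranuRise's actual implementation):

```cpp
// Record a short control gesture at control rate, then loop it back as
// a modulation source for any parameter - no LFO matrix required.
#include <cstddef>
#include <vector>

class GestureLooper {
public:
    void startRecording() { recording = true; frames.clear(); }
    void stopRecording()  { recording = false; playhead = 0; }

    // `value` is the live gesture (e.g. an XY-pad coordinate in 0..1).
    // While recording it passes through; afterwards the recorded shape
    // loops forever as the modulation value.
    float tick(float value) {
        if (recording) { frames.push_back(value); return value; }
        if (frames.empty()) return value;
        float out = frames[playhead];
        playhead = (playhead + 1) % frames.size();
        return out;
    }

private:
    std::vector<float> frames;
    std::size_t playhead = 0;
    bool recording = false;
};
```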

Speakers

Andrej Kobal

Self-employed, Andrej Kobal
I'm a composer, sound designer, and Max MSP programmer, regularly present across Slovenia and Europe with major sound art installations, custom-built multimedia sound solutions, and unique live performances. I have built the virtual instrument GranuRise, which includes an…


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Introducing your robot lead singer
To accompany the talk on singing synthesis (Track 4, Tuesday 19th 12pm), Chris will be on hand to take participants through CereProc's singing system, including:
  • The "Trump" demo - grab a microphone, croon to your heart's content, and prepare to be amazed (or horrified) as our beloved POTUS sings right back at you
  • STARS transplant procedure - think your singing is bad? Scientifically confirm this first, before letting us swap your "bad" singing features for output from our professional singer model
  • Vox2XML service - convert a set of vocals and lyrics into CereVoice-compatible singing markup script
  • Cutting-edge parametric singing synthesis via CereVoice, harnessing the latest neural vocoders in speech technology and allowing any of our available voices to sing with unparalleled naturalness and expressiveness

This demo session is intended to encourage participants to think about, explore, and try out their own ideas for singing interfaces, as well as to assess state-of-the-art neural waveform speech and singing synthesis across CereProc's broad range of voices.

Speakers

Christopher Buchanan

Audio Development Engineer, CereProc
Chris Buchanan is Audio Development Engineer at CereProc. He graduated with Distinction from the Acoustics & Music Technology MSc at the University of Edinburgh in 2016, after three years as a signal-processing geophysicist with the French seismic imaging company CGG. He also holds…


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Melodrumatic - Using MIDI notes to pitch-shift via delay
Melodrumatic is an audio plugin that lets you "pitch-shift" via delay, turning unpitched audio into melodies. Controllable via MIDI or mouse :)

Currently available formats: VST3, AU, AAX, Unity
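For readers curious how delay can "pitch-shift", here is a minimal sketch of the general technique the description points at (not Melodrumatic's source code): a read head sweeps through a delay line at a rate other than 1, producing a Doppler shift, with two crossfaded taps hiding the wrap:

```cpp
// A delay line read by two heads that sweep at a rate != 1, producing a
// Doppler pitch shift; the heads are half a window apart and crossfaded
// so the wrap point is inaudible. No interpolation, for brevity.
#include <cmath>
#include <cstddef>
#include <vector>

class DelayPitchShifter {
public:
    explicit DelayPitchShifter(double sampleRate, double windowMs = 50.0)
        : window(static_cast<std::size_t>(sampleRate * windowMs / 1000.0)),
          buffer(window * 4, 0.0f) {}

    void setSemitones(double st) { ratio = std::pow(2.0, st / 12.0); }

    float process(float in) {
        const double kPi = 3.141592653589793;
        buffer[writePos] = in;
        float out = 0.0f;
        for (int tap = 0; tap < 2; ++tap) {
            double ph = std::fmod(phase + 0.5 * tap, 1.0);
            std::size_t delay = static_cast<std::size_t>(ph * window);
            std::size_t readPos =
                (writePos + buffer.size() - delay) % buffer.size();
            float gain = static_cast<float>(std::sin(kPi * ph)); // equal power
            out += gain * buffer[readPos];
        }
        // delay shrinks for upward shifts, grows for downward ones
        phase = std::fmod(phase + (1.0 - ratio) / window + 1.0, 1.0);
        writePos = (writePos + 1) % buffer.size();
        return out;
    }

private:
    std::size_t window;
    std::vector<float> buffer;
    std::size_t writePos = 0;
    double phase = 0.0, ratio = 1.0;
};
```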

Speakers

David Su

Audio Programmer, Bad Dream Games
David Su is a musician, game developer, and researcher. Currently he does audio programming on Bad Dream Games' One Hand Clapping in addition to developing a suite of interactive songs with singer-songwriter-engineer Dominique Star. David recently released Yi and the Thousand Moons…


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: MoveMIDI - 3D positional movement interaction with user-defined, virtual interface for music software
MoveMIDI Description
MoveMIDI is a piece of software that allows the user to create digital music using body movements. It communicates with 3D tracking hardware such as the PlayStation Move to translate the user's positional body movements into MIDI messages that can be interpreted by music creation software like Ableton Live. With MoveMIDI, a user could "air-drum" their drum samples, or sweep a filter by sweeping an arm across their body. By using MoveMIDI in a performance context, electronic music performers can convey their actions to an audience through large body movements, and dancers could use it to create music through dance. Since many acoustic instruments require spatial movement of the body to play, the spatial interaction promoted by MoveMIDI may help users interact with music software similarly to how they interact with acoustic instruments, leveraging pre-existing knowledge. This spatial familiarity may also help an audience interpret a performer's actions.

MoveMIDI allows the user to construct and interact with a virtual 3D instrument interface which can be played by moving the body relative to the virtual interface elements. This virtual interface can be customized by the user in layout, size, and functionality. The current implementation uses a computer screen to display the virtual 3D interface to the user while visualization via head mounted display is in development.

MoveMIDI software won the 2018 JUCE Award and was published as a “Late Breaking Work” paper/poster and interactive demonstration at the ACM CHI 2019 Conference in Glasgow.

See MoveMIDI.com for more information.

Demonstration Outline
The demonstration begins with a 1-2 minute explanation of MoveMIDI. Next, MoveMIDI's two main modes are demonstrated: Hit Mode and Morph Mode. Hit Mode allows a user to hit virtual, spherical trigger points called Hit Zones; when Hit Zones are hit, they trigger musical notes or samples by sending MIDI messages. Morph Mode allows a user to manipulate many timbral characteristics of audio simultaneously by moving their arms within a predefined 3D Morph Zone; movements in this zone send a different MIDI Control Change message per 3D axis. Next, a one-minute performance using MoveMIDI is given. Finally, audience members are invited to try MoveMIDI for themselves: a volunteer is given the handheld controllers and may experiment and create music. This demonstration process repeats as attendees move between demonstrations.
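A hypothetical sketch of the Morph Mode idea (the type names and sendCC callback are our assumptions, not MoveMIDI's actual API): a 3D controller position inside a box-shaped zone maps to three MIDI CC messages, one per axis:

```cpp
// Hypothetical Morph Mode sketch: a 3D controller position inside a
// box-shaped zone is scaled to 0..127 and sent as one MIDI Control
// Change per axis. The sendCC callback stands in for a real MIDI output.
#include <algorithm>
#include <cstdint>
#include <functional>

struct Vec3 { float x, y, z; };

struct MorphZone {
    Vec3 lo, hi;                    // zone bounds in tracker coordinates
    std::uint8_t ccX, ccY, ccZ;     // CC numbers assigned to each axis

    static std::uint8_t toCC(float v, float a, float b) {
        float n = std::clamp((v - a) / (b - a), 0.0f, 1.0f);
        return static_cast<std::uint8_t>(n * 127.0f + 0.5f);
    }

    // Emit one CC per axis for the current controller position.
    void update(const Vec3& p,
                const std::function<void(std::uint8_t, std::uint8_t)>& sendCC) {
        sendCC(ccX, toCC(p.x, lo.x, hi.x));
        sendCC(ccY, toCC(p.y, lo.y, hi.y));
        sendCC(ccZ, toCC(p.z, lo.z, hi.z));
    }
};
```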

Speakers

Tim Arterbury

CEO, TesserAct Music Technology LLC / Baylor University
I am a graduate student pursuing a Master's degree in computer science and researching human-computer interaction with music software. My latest project, MoveMIDI, uses in-air body movements to control music software. See more at MoveMIDI.com. I do a lot of C++ and JUCE coding and I have a passion for making music…


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Multipurpose Rotary Knob with Torque Feedback
A hands-on demo of a knob controller for smart human-machine interaction applications.
You can try it yourself, manipulating the audio parameters of a mobile VJ application and customizing your preferred haptic responses.
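As a sketch of how such a knob can synthesize haptics in software (an assumption about the general technique; the demo's firmware may differ), detents can be rendered by commanding motor torque as a sinusoidal function of knob angle plus damping:

```cpp
// Software-rendered detents for a motorized knob: commanded torque is a
// sinusoidal spring profile with N stable detents per revolution, plus
// viscous damping to keep the loop stable. Constants are placeholders.
#include <cmath>

struct DetentHaptics {
    int    detents   = 24;     // clicks per revolution
    double stiffness = 0.02;   // peak detent torque, N*m
    double damping   = 0.001;  // N*m*s/rad

    // angle [rad], velocity [rad/s] -> motor torque command [N*m]
    double torque(double angle, double velocity) const {
        double spring = -stiffness * std::sin(detents * angle);
        double drag   = -damping * velocity;
        return spring + drag;
    }
};
```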

Speakers

Francesco Martina

Hardware Designer, CERN
Francesco studied Electronics Engineering at the University of Pisa and Scuola Superiore Sant'Anna (Pisa, Italy). Since April 2017 he has been employed in the Beam Instrumentation group at CERN (Geneva) under the supervision of Dr. Christos Zamantzas, working on mixed-signal design…


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Sound Control - Enabling fast and flexible musical interface creation for disability
This will be a hands-on demo of how to use the Sound Control software. Whilst it is designed primarily to meet the musical needs of children and young people with profound and multiple learning disabilities, the software can be used by anyone to make music, and to be expressive in doing so, from keen amateurs through to seasoned professionals.

There is a range of input control devices to choose from (e.g. legacy PS2 golf game controllers, motion sensors, the BBC micro:bit, etc.), and the software can produce musical outputs ranging from digitally synthesised sounds to the use and transformation of samples, as well as MIDI capability to connect with commercial software and DAWs.

Simon will be speaking further about this at 15:00-15:25 in Track 3 (CMD).

Speakers

Simon Steptoe

Musical Inclusion Programme Manager, Northamptonshire Music and Performing Arts Trust
I currently work as Musical Inclusion Programme and Partnership Manager at Northamptonshire Music and Performing Arts Trust, which is the lead for the Music Education Hubs in Northamptonshire and Rutland. My role is to set up projects with children and young people in challenging circumstances…


Tuesday November 19, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB
 
Wednesday, November 20
 

12:30 GMT

Demo: Per-sample C: a live-coding environment for learning audio DSP in C/C++
C++ is the language of choice of audio developers and the industry. Its power, flexibility, and speed serve us well once we invest the time and effort to learn it. While many projects and technologies help to lower the barrier to entry or ease the learning curve, C++ remains very challenging for newcomers, especially those aiming to learn audio DSP in C++.

Per-Sample C is a C++ compiler environment for new students of audio programming. Based on LLVM/Clang and (optionally) the Tiny C Compiler, it recompiles a student's short C/C++ program each time it changes, generating audio and visualizing the waveforms and spectra of the resulting signals. The immediacy of the system satisfies the curiosity of the impatient student who might not otherwise choose to learn C++.

Per-Sample C does not try to be a comprehensive framework or tool, like JUCE, and it does not necessarily encourage "best practices" for writing C++, but it is a starting point for students who want to jump directly into audio signal processing in C++.

We recognize Viznut's "Algorithmic symphonies from one line of code" as the single greatest inspiration for this system. Our primary goal is education, but the composition and performance of bytebeat and demoscene music is a secondary aim. More generally, we are motivated by questions like "what would GLSL/shaders for audio look like?" and by immediacy in programming.

This demo explores techniques in audio synthesis, effects, and music composition in terms of the Per-Sample C environment. These include:
  • Frequency Modulation
  • Phase Distortion
  • Biquad, One-Pole, and Comb Filters
  • Delay and Reverb
  • Waveshaping
  • Sequencing and Envelope Generation

See <https://youtu.be/FdgVW8AOcMs> for more information.
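To give a flavour of the approach, here is the kind of program such an environment invites (the entry-point name and signature are our assumptions for illustration, not Per-Sample C's actual interface): one function maps a sample index to an output sample, here doing simple two-operator FM:

```cpp
// One function, called once per sample, maps the sample index t to an
// output sample - here, simple two-operator FM synthesis.
#include <cmath>

const double kSampleRate = 44100.0;
const double kPi = 3.141592653589793;

float process(long t) {
    double time  = t / kSampleRate;
    double fc    = 220.0;  // carrier frequency, Hz
    double ratio = 2.0;    // modulator : carrier frequency ratio
    double index = 3.0;    // modulation index
    double mod = std::sin(2.0 * kPi * fc * ratio * time);
    return static_cast<float>(
        0.5 * std::sin(2.0 * kPi * fc * time + index * mod));
}
```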
 

Speakers

Karl Yerkes

Lecturer in Media Arts and Technology, University of California Santa Barbara
I'm a hacker working on interactive and distributed audiovisual systems. I teach and direct an electroacoustic ensemble focused on new interfaces for musical expression. I was an artist in residence at the SETI Institute.


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Procedural audio models for video games with Chunity
The demo will showcase several procedural audio models made in ChucK and integrated into Unity via Chunity. The aim of the demo is to introduce the topic to attendees, encouraging them to experiment with the models in the Unity scene by changing their parameters. The demo uses ChucK because it is an easy-to-understand language well suited to fast prototyping and, with Chunity, offers one of the easiest ways to do sound synthesis within Unity. All the code will be available in a GitHub repository.

Speakers

Adrián Barahona-Ríos

PhD Student, University of York
I am a PhD student at the EPSRC-funded Centre for Doctoral Training in Intelligent Games and Game Intelligence (http://iggi.org.uk), based in York. Since 2018, in collaboration with Sony Interactive Entertainment Europe, I have been researching strategies to increase the efficiency…


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Prototyping interactive software and installation art with SuperCollider and JavaScript
Software infrastructure built to share state between Node.js and SuperCollider processes, with specific case studies including three interactive art installations and a web-based touch-screen generative music tool. Some concepts are borrowed from the popular Redux JavaScript library and integrated into SuperCollider's pattern (Pbind) framework.

Speakers

Colin Sullivan

Creative Technologist
I am a creative software developer with experience building music-related tools. Happy to chat about software architecture, especially as it relates to systems, web technologies, and the future of real-time audio.


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Real-time interface for audio generation using CVAEs
Speakers

Sam Parke-Wolfe

Software Developer, Sam Parke-Wolfe
Sam is an interdisciplinary software developer specialising in digital audio, music and machine learning. Sam received a first class Bachelor of Science in Music Computing from Goldsmiths, University of London in 2017. His main passion is exploring methods of musical expression with…


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Recreating vintage divide-down synths
In 1983, Korg released the Poly-800, their reply to Roland's Juno-106. The Poly-800 was based on an arcade chip (MSM5232) and used a master clock divide-down system similar to drawbar organs and the Korg Delta before it. The synth adds together square waves to create an approximation to sawtooth waves, similar to how we can create a square wave by adding together sine waves. 

This simple technique makes a great starting point for your own DIY synth. At this demo you can explore how to build such a system, whether with analogue parts, Arduinos, or in Pure Data. Or just come and play on a recreation of the Poly-800.
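A minimal sketch of the divide-down idea (our illustration, not the Poly-800 schematic): square waves an octave apart, summed with halving amplitudes, approximate a sawtooth:

```cpp
// Square waves an octave apart (as a divide-down chain produces), summed
// with halving amplitudes, approximate a sawtooth - the dual of building
// a square wave from sine harmonics.
#include <cmath>
#include <cstdio>

double square(double phase01) { return phase01 < 0.5 ? 1.0 : -1.0; }

double dividedDownSaw(double t, double f0, int octaves) {
    double out = 0.0, amp = 1.0;
    for (int k = 0; k < octaves; ++k) {
        double f = f0 * std::pow(2.0, k);          // one fewer divide-by-two
        out += amp * square(std::fmod(f * t, 1.0));
        amp *= 0.5;                                // halve each octave up
    }
    return out;
}

int main() {
    const double sr = 44100.0;
    for (long n = 0; n < 256; ++n)                 // print a few samples
        std::printf("%f\n", dividedDownSaw(n / sr, 110.0, 4));
}
```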

Speakers

David Blurton

Freelance audio developer
I build electronics and write DSP code from my home office in Reykjavik. Talk to me about reverb design!


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Seeing sounds - Creative audio/visual plugin experiments with JUCE
In this session Halldór and Ragnar talk about their collaborative quest for unique and original sounds through experiments with visual representations of audio. They present tools and prototypes created during this phase, and a soon-to-be-released audio plugin made with JUCE - a product of this explorative work.

Demo Outline
We will talk about the idea of creatively representing and manipulating audio in the visual domain - touching on other people's previous experiments and our study of them; the inspirations, research, and challenges of implementing real-time plugins with a specific visual aesthetic; and the fickle relationship between the audio and visual domains. We'll show a series of playful experiments with image algorithms and graphical tools, followed by a hands-on demo and discussion.

Speakers

Halldór Eldjárn

Audio developer, Inorganic Audio
I'm an Icelandic musician and a programmer. I write music, and work with other artists on creating new ways of expression. My main interest is augmenting creativity with technology, in a way that inspires the artist and affects her work.

Ragnar Hrafnkelsson

Director | Developer, Reactify
I make software, tools and interactive experiences for music creation, consumption and performance.


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

12:30 GMT

Demo: Sieve - A plug-in for the intelligent browsing of drum samples using manifold learning
Sieve is an experimental audio plug-in designed to aid the process of navigating and selecting from large collections of drum samples. Kick and snare drum samples are automatically organized by sound similarity and visualized on a 2D grid interface inspired by hardware such as the Ableton Push. Organization of the drum samples is achieved using audio feature extraction and manifold learning techniques. Development of the plug-in was based on research focused on determining the best method for computationally characterizing kick and snare drum samples, evaluated using audio classification tasks. A functioning version of Sieve will be available for demonstration alongside a poster presentation outlining the research methodologies used, as well as the implementation of the plug-in, which was carried out using the JUCE software framework.
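As a toy illustration of the final layout step such a plug-in needs (our sketch, not Sieve's actual algorithm), 2D embedding coordinates from a manifold learner can be snapped onto a fixed grid by greedily assigning each sample to the nearest free cell:

```cpp
// Greedy grid snap: each sample's 2D embedding coordinate (e.g. from a
// manifold learner, normalized to [0,1]) claims the nearest free cell.
// Assumes no more samples than cells; O(N * rows * cols), fine for demos.
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Point { float x, y; };

std::vector<int> snapToGrid(const std::vector<Point>& pts, int rows, int cols) {
    std::vector<bool> taken(rows * cols, false);
    std::vector<int> cellOf(pts.size(), -1);
    for (std::size_t i = 0; i < pts.size(); ++i) {
        float best = std::numeric_limits<float>::max();
        int bestCell = -1;
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < cols; ++c) {
                int cell = r * cols + c;
                if (taken[cell]) continue;
                float cx = (c + 0.5f) / cols, cy = (r + 0.5f) / rows;
                float d = std::hypot(pts[i].x - cx, pts[i].y - cy);
                if (d < best) { best = d; bestCell = cell; }
            }
        taken[bestCell] = true;
        cellOf[i] = bestCell;    // row-major cell index for this sample
    }
    return cellOf;
}
```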

Speakers

Jordie Shier

Student, University of Victoria
Jordie Shier is currently pursuing a master’s degree in computer science and music at the University of Victoria in Canada. He began research in this area at the end of his bachelor’s degree and has since presented work at the Workshop on Intelligent Music Production and AES NYC…


Wednesday November 20, 2019 12:30 - 14:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB