Poster
Tuesday, November 19
 

10:30 GMT

Poster: GPU-accelerated physical model for real-time drumhead synthesis
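
As a rough illustration of the kind of physical model involved, the sketch below steps a 2D wave-equation finite-difference drumhead in plain C++; a GPU implementation would typically compute each interior grid point of this update in parallel. The grid size, Courant number and damping are illustrative assumptions, not values from the poster.

```cpp
#include <cstddef>

// Minimal 2D finite-difference drumhead: three displacement grids
// (previous, current, next) advanced once per audio sample.
// On a GPU, each interior grid point of the loop nest below would be
// computed by its own thread; this is a plain CPU reference sketch.
struct Drumhead
{
    static constexpr std::size_t N = 64;    // grid points per side (illustrative)
    float prev[N][N] {}, curr[N][N] {}, next[N][N] {};
    float courant2 = 0.49f;                 // (c*dt/dx)^2, kept below 0.5 for stability
    float damping  = 0.001f;                // simple frequency-independent loss term

    void excite (std::size_t x, std::size_t y, float amount) { curr[x][y] += amount; }

    // Advance the membrane by one time step and return one output sample.
    float step (std::size_t readX = N / 3, std::size_t readY = N / 3)
    {
        for (std::size_t x = 1; x < N - 1; ++x)
            for (std::size_t y = 1; y < N - 1; ++y)
            {
                const float laplacian = curr[x + 1][y] + curr[x - 1][y]
                                      + curr[x][y + 1] + curr[x][y - 1]
                                      - 4.0f * curr[x][y];

                next[x][y] = (2.0f * curr[x][y]
                              - (1.0f - damping) * prev[x][y]
                              + courant2 * laplacian) / (1.0f + damping);
            }

        // Rotate the grids (copying is fine for a sketch; real code would swap pointers).
        for (std::size_t x = 0; x < N; ++x)
            for (std::size_t y = 0; y < N; ++y)
            {
                prev[x][y] = curr[x][y];
                curr[x][y] = next[x][y];
            }

        return curr[readX][readY];
    }
};
```
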
Speakers
Harri Renney

Student, UWE
I am a PhD student at the University of the West of England (UWE). My research investigates the use of Graphics Processing Units (GPUs) for accelerating digital audio-related tasks. Come speak to me if you're interested in these areas!


Tuesday November 19, 2019 10:30 - 11:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

10:30 GMT

Poster: iPhone as an audiometer and precision audio reproduction and measurement device
In this poster, we present a new audiometer device developed for iPhones in collaboration with leading hearing scientists at a US-based therapeutics organisation. Capitalising on recent developments in this field, we have delivered a solution capable of measurements comparable in accuracy to those achieved in clinical trials using expensive audiometers operated by specialist medical professionals. In the poster we will discuss:

  • Recent developments in this field
  • The DSP and software development details of the application
  • The specific challenges around achieving accurate sound reproduction and measurement using a mobile device
  • Performance of the application and the results of the clinical trials
  • Limitations of the application and further considerations
  • The wider context: how could an accurate, mass-market audiometer application help prevent hearing loss, and what could it mean for the research and development of new treatments?
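
As a hedged illustration of the kind of threshold-seeking procedure a pure-tone audiometer implements (and not the specific algorithm described in this poster), the sketch below follows the widely used Hughson-Westlake-style staircase: drop the level 10 dB after a heard tone, raise it 5 dB after a missed one. All class and parameter names are hypothetical.

```cpp
#include <map>

// Hypothetical sketch of a Hughson-Westlake-style pure-tone staircase:
// drop the presentation level by 10 dB after each heard tone, raise it by
// 5 dB after each missed tone, and report the lowest level that was heard
// on two separate ascending presentations.
class ToneStaircase
{
public:
    explicit ToneStaircase (float startLevelDbHL = 40.0f) : level (startLevelDbHL) {}

    float currentLevelDbHL() const { return level; }

    // Feed back whether the listener responded to the last presentation.
    void registerResponse (bool heard)
    {
        if (heard)
        {
            if (ascending)                       // only ascending runs count
                ++hitsAtLevel[(int) level];
            level -= 10.0f;
            ascending = false;
        }
        else
        {
            level += 5.0f;
            ascending = true;
        }
        ++presentations;
    }

    // Threshold = lowest level heard on two ascending runs (map is sorted by level).
    bool thresholdFound (float& thresholdDbHL) const
    {
        for (const auto& [lvl, hits] : hitsAtLevel)
            if (hits >= 2) { thresholdDbHL = (float) lvl; return true; }
        return false;
    }

    bool givenUp() const { return presentations >= 30; }   // safety cap

private:
    float level;
    bool  ascending = false;
    int   presentations = 0;
    std::map<int, int> hitsAtLevel;   // ascending hits per dB level
};
```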

Speakers
David Gibson

Managing Director, FutureSonic Ltd
Expert mobile software consultant, specialising in audio and music technology. FutureSonic is a mobile software studio dedicated to everything audio, music and mobile. We work directly with companies around the globe to develop software that offers users new ways to create, perform...


Tuesday November 19, 2019 10:30 - 11:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

15:30 GMT

Poster: Embodied interaction with sound
Our poster discusses the use of gestural body movement for interaction with music software. While interaction with music software and virtual instruments has traditionally involved Graphical User Interfaces (GUIs) and/or hardware MIDI controllers, we discuss the unique interactions and interfaces found in recent body-movement controllers. Some of these controllers are designed for performance contexts, giving the audience a visualization of the performer’s actions on the music; others aim to provide a high degree of expression, or to make music creation more accessible to all. The idea of body-motion-based interaction with electronic instruments is far from new: the Theremin has existed since 1920. Today, interfaces that allow mid-air body gestures to control music applications continue to become available, such as the MI.MU Gloves, Wave MIDI Ring, Xbox Kinect, and Leap Motion Controller. These interfaces are often accompanied by software tools that let the user design custom gestural mappings to audio parameters.
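
As a hedged illustration of the kind of gesture-to-parameter mapping such software tools expose (not MoveMIDI's actual implementation), the sketch below turns a normalised hand height into a MIDI continuous-controller message using JUCE; the channel, CC number and smoothing coefficient are illustrative assumptions.

```cpp
#include <JuceHeader.h>

// Illustrative mapping from a tracked hand position (normalised 0..1)
// to a MIDI continuous controller: the sort of gesture-to-parameter
// mapping a body-movement controller's companion software lets users design.
struct HandToCCMapping
{
    int   midiChannel = 1;     // assumed values, not from the poster
    int   ccNumber    = 1;     // CC1 = mod wheel, a common default target
    float smoothing   = 0.7f;  // simple one-pole smoothing of the gesture

    juce::MidiMessage process (float normalisedHandHeight)
    {
        const float clamped = juce::jlimit (0.0f, 1.0f, normalisedHandHeight);
        smoothed = smoothing * smoothed + (1.0f - smoothing) * clamped;

        const int ccValue = (int) juce::jmap (smoothed, 0.0f, 1.0f, 0.0f, 127.0f);
        return juce::MidiMessage::controllerEvent (midiChannel, ccNumber, ccValue);
    }

private:
    float smoothed = 0.0f;
};
```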

Speakers
Balandino Di Donato

Lecturer in Creative Computing, University of Leicester
Tim Arterbury

CEO, TesserAct Music Technology LLC / Baylor University
I am a graduate student pursuing a Master's degree in computer science and researching human-computer interaction with music software. My latest project, MoveMIDI, uses in-air body movements to control music software. See more at MoveMIDI.com. I do a lot of C++ and JUCE coding and I have a passion for making music...


Tuesday November 19, 2019 15:30 - 16:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB
 
Wednesday, November 20
 

10:00 GMT

Poster: Using dynamic time warping to improve the classical music production workflow
The current music production workflow, comprising recording, editing, mixing, and mastering, requires a great deal of manual work from the sound engineer, and existing technology can streamline much of it. This poster presents a project that aims to bring recent advances in Music Information Retrieval (MIR) techniques into music production tools in order to bridge this gap. The work presented here comes from a Master’s thesis project from the Music Technology Lab at the Massachusetts Institute of Technology (MIT).

The goal of this project is to explore all areas of the music production workflow (with a focus on classical music) that could benefit from digital signal processing-based tools, to build and iterate on these tools, and to turn them into products that are useful and easy to use. We collaborated with Boston Symphony Orchestra (BSO) sound engineers to gather requirements for the project, which led to the identification of two potential tools: an automatic marking transfer (AMT) system and an audio search (AS) system. We have since collaborated with other potential users of both the AMT and AS tools, including sound engineers from radio stations in the Boston area. This has enabled us to identify additional workflows and finalize the requirements for these tools. Based on these, we have built standalone applications for AMT and AS.

This poster will share the motivation for our work, the technical details of the design, implementation, and evaluation of AMT and AS, a demonstration of the tools, and future directions for this work.
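
As a hedged sketch of how dynamic time warping can transfer a marking between two recordings of the same piece (not the AMT system's actual code), the example below computes a classic DTW alignment over per-frame feature vectors and returns the warping path; a marking at a given frame in the reference take can then be mapped to the corresponding frame in the target take. Feature extraction and all names are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <utility>
#include <vector>

// Classic O(N*M) dynamic time warping over per-frame feature vectors
// (e.g. chroma or MFCCs), returning the warping path as
// (referenceFrame, targetFrame) pairs.
using Frame = std::vector<float>;

static float frameDistance (const Frame& a, const Frame& b)
{
    // Euclidean distance; assumes equal-length feature vectors.
    float sum = 0.0f;
    for (size_t i = 0; i < a.size(); ++i)
        sum += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt (sum);
}

std::vector<std::pair<size_t, size_t>> dtwPath (const std::vector<Frame>& ref,
                                                const std::vector<Frame>& target)
{
    const size_t n = ref.size(), m = target.size();
    const float inf = std::numeric_limits<float>::infinity();
    std::vector<std::vector<float>> cost (n + 1, std::vector<float> (m + 1, inf));
    cost[0][0] = 0.0f;

    // Accumulated-cost matrix with the standard step pattern.
    for (size_t i = 1; i <= n; ++i)
        for (size_t j = 1; j <= m; ++j)
            cost[i][j] = frameDistance (ref[i - 1], target[j - 1])
                       + std::min ({ cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1] });

    // Backtrack from the end to recover the alignment path.
    std::vector<std::pair<size_t, size_t>> path;
    size_t i = n, j = m;
    while (i > 0 && j > 0)
    {
        path.emplace_back (i - 1, j - 1);
        const float diag = cost[i - 1][j - 1], up = cost[i - 1][j], left = cost[i][j - 1];
        if (diag <= up && diag <= left) { --i; --j; }
        else if (up <= left)            { --i; }
        else                            { --j; }
    }
    std::reverse (path.begin(), path.end());
    return path;
}

// A marking placed at frame `refFrame` of the reference recording can be
// transferred by finding the first path entry whose reference index reaches
// that frame and taking its target index (then converting back to a timestamp).
```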

Speakers
Smriti Pramanick

Student, Massachusetts Institute of Technology
I am a recent graduate of the Massachusetts Institute of Technology (MIT), where I majored in computer science and minored in music (BS ’18, MEng ’19). I am interested in the intersection of music and technology, particularly how we can use technology to improve our musical experiences...


Wednesday November 20, 2019 10:00 - 10:30 GMT
Newgate Room Puddle Dock, London EC4V 3DB

15:30 GMT

Poster: Freesound API: add 400k+ sounds to your plugin!
Freesound is a collaborative database where users share sound effects, field recordings, musical samples and other audio material under Creative Commons licenses. Freesound offers both a website for interacting with the database and a RESTful API for programmatically browsing and retrieving sounds and other Freesound content.

Freesound currently contains more than 415k sounds that have been downloaded 139M times by 9M registered users. We present a client library that makes it easy to integrate Freesound into JUCE projects. Among other things, the library lets you use the advanced text- and audio-based Freesound search engine, download and upload sounds, and retrieve a variety of sound analysis information (i.e. audio features) for all Freesound content.
The code, together with examples and documentation, is available at github.com/MTG/freesound-juce. Come check out our poster to learn more about this library!
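
For a feel of what a call to the Freesound API involves before reaching for the library, here is a hedged sketch of a raw text search against the documented apiv2 endpoint using JUCE's URL and JSON classes; the field list and token handling are illustrative assumptions, and the freesound-juce library wraps requests like this in a friendlier interface.

```cpp
#include <JuceHeader.h>

// Illustrative raw call to the Freesound text-search endpoint; a real
// application would use the freesound-juce client library instead.
juce::var searchFreesound (const juce::String& query, const juce::String& apiToken)
{
    auto url = juce::URL ("https://freesound.org/apiv2/search/text/")
                   .withParameter ("query",  query)
                   .withParameter ("fields", "id,name,previews")   // assumed field list
                   .withParameter ("token",  apiToken);            // your Freesound API credential

    const auto response = url.readEntireTextStream();  // blocking; keep off the audio thread
    return juce::JSON::parse (response);
}

// Example use:
//   auto results = searchFreesound ("snare", myToken);
//   auto* sounds = results.getProperty ("results", {}).getArray();
```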

Speakers
António Ramires

PhD Researcher, MTG - Universitat Pompeu Fabra
I'm a PhD student in the Music Technology Group of Universitat Pompeu Fabra, a music maker as Caucenus and a radio host in Rádio Universidade de Coimbra. My main area of research is music and audio signal processing, together with machine learning, towards the development of interfaces...
Frederic Font

Post-doctoral researcher, Music Technology Group, Universitat Pompeu Fabra
I'm a researcher and developer at the Music Technology Group (MTG), Universitat Pompeu Fabra, Barcelona. The MTG is one of the biggest music technology research groups in Europe. There I coordinate the Freesound website and all Freesound-related projects that we carry out.


Wednesday November 20, 2019 15:30 - 16:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB

15:30 GMT

Poster: Virtual analog modelling with automated nodal DK-method in real-time
Note: even if you don't have any previous knowledge of virtual analog modelling, do join! You should, however, know the basics of linear algebra, circuit schematics, and solving nonlinear systems.

The poster presents a prototyping framework for virtual analog modelling in the state-space domain. For this, the matrices needed for computation are derived from circuits in a netlist representation produced with the application LTSpice.
Specifically, the following aspects are covered:
  • Basics of state-space modelling in the virtual analog domain and the challenges it poses
  • Generating the matrices needed for computation
  • Debugging and testing framework in MATLAB®
  • Real-time implementation in the JUCE framework
  • Handling nonlinear circuit components
  • Current status of the project
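
As a hedged, linear-only illustration of the state-space form that the nodal DK-method produces (the netlist-derived matrices and the nonlinear solver are the subject of the poster and are not shown here), the sketch below runs a one-state discretised RC low-pass through the generic update x[n+1] = A·x[n] + B·u[n], y[n] = C·x[n] + D·u[n]; the component values are illustrative.

```cpp
// Generic single-input/single-output linear state-space step:
//   x[n+1] = A*x[n] + B*u[n]
//   y[n]   = C*x[n] + D*u[n]
// In a full DK-method model, A, B, C, D are matrices derived from the
// circuit netlist and an additional nonlinear equation is solved per sample.
struct StateSpace1
{
    float A = 0.0f, B = 0.0f, C = 0.0f, D = 0.0f;
    float x = 0.0f;   // single state variable

    float processSample (float u)
    {
        const float y = C * x + D * u;   // output from current state
        x = A * x + B * u;               // advance the state
        return y;
    }
};

// Illustrative RC low-pass (R = 1 kOhm, C = 100 nF) discretised with the
// backward Euler rule at sample rate fs; values chosen only as an example.
StateSpace1 makeRCLowpass (float fs)
{
    const float R = 1000.0f, Cap = 100e-9f;
    const float a = 1.0f / (1.0f + R * Cap * fs);   // pole coefficient

    StateSpace1 s;
    s.A = 1.0f - a;   // x[n+1] = (1-a)*x[n] + a*u[n]
    s.B = a;
    s.C = 1.0f;       // output is the capacitor voltage (the state)
    s.D = 0.0f;
    return s;
}
```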

Speakers
Daniel Struebig

Student, Aalborg University Copenhagen
I'm very much interested in reverberation and occlusion/obstruction in the context of video games, and I'm currently investigating Microsoft's Project Acoustics. Apart from that, I'm interested in everything related to tools that empower sound designers in the context of video games.


Wednesday November 20, 2019 15:30 - 16:00 GMT
Newgate Room Puddle Dock, London EC4V 3DB