
Monday, November 14
 

8:30am GMT

Workshop Breakfast
Monday November 14, 2022 8:30am - 9:00am GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

9:00am GMT

Workshop Welcome & Introduction

Monday November 14, 2022 9:00am - 9:30am GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

9:30am GMT

Online Training Lab: Solving GPU Audio Processing Challenges, Parallelizing DSP Algorithms and Executing for Real-Time and Offline Rendering
GPU-based audio processing has long been considered something of a unicorn in both the pro audio and GPU industries. The potential of a GPU's parallel architecture is both exciting and elusive, owing to the computer science issues of working with sequential DSP algorithm designs and the fundamental differences between MIMD and SIMD devices. Now that it is possible, GPU-processed audio can offer any audio application processing power orders of magnitude greater than its CPU counterparts, fulfilling a cross-industry need that has arisen quickly as digital media content adopts AI, ML, cloud-based collaboration, virtual modeling, simulated acoustics and immersive audio, to name a few. Previous research had concluded that, because of heavy latencies and a myriad of computer science issues, DSP on GPUs was neither possible nor preferable. Recognizing the need for a viable, low-level standard and framework for real-time professional GPU audio processing, GPU AUDIO INC set out to solve these fundamental problems.


The purpose of this workshop is to give you hands-on experience of what GPU Audio processing solves and what it can mean for your software and the future of audio. It is a taste of the GPU Audio SDK, coming soon.

In this course you will learn about the fundamental problems solved by the new GPU Audio standard, go deeper into our core technology, and learn how to incorporate real-time, low-latency DSP algorithms into your projects. You will take part in a deep-dive, hands-on tutorial: building a simple processor, implementing your own IIR processor, measuring performance and playback, and “taking home” the code to build an FIR processor. All made possible by the GPU Audio Scheduler.
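The IIR exercise above touches a core question: a recursion y[n] = a*y[n-1] + b*x[n] looks inherently serial, yet it can be restructured so that most of the work is independent per block. The sketch below (plain Python; the function names and the three-phase decomposition are illustrative, not taken from the GPU Audio SDK) shows one classic way to do it:

```python
def iir1_serial(x, a, b):
    """y[n] = a*y[n-1] + b*x[n], computed one sample at a time."""
    y, state = [], 0.0
    for s in x:
        state = a * state + b * s
        y.append(state)
    return y

def iir1_block_parallel(x, a, b, block=64):
    """Same filter, restructured so most work is independent per block.

    Phase 1 (parallelizable): each block computes its local response
    assuming a zero entry state.
    Phase 2 (serial, one value per block): propagate the carry (entry state).
    Phase 3 (parallelizable): add the scaled carry into each block's samples,
    using linearity: y[s+j] = a**(j+1) * y_entry + local[j].
    """
    blocks = [x[i:i + block] for i in range(0, len(x), block)]
    locals_ = []
    for blk in blocks:                      # phase 1: independent per block
        part, state = [], 0.0
        for s in blk:
            state = a * state + b * s
            part.append(state)
        locals_.append(part)
    y, carry = [], 0.0
    for part in locals_:                    # phases 2 and 3
        scale = a
        for p in part:
            y.append(p + scale * carry)     # add a**(j+1) * entry state
            scale *= a
        carry = y[-1]                       # exit state feeds the next block
    return y
```

Phases 1 and 3 are independent per block, which is what a massively parallel device can exploit; only the one-value-per-block carry chain in phase 2 stays sequential.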

Prerequisite(s):
Familiarity with DSP algorithms and designs
Familiarity with modern SWE tools (IDEs, Git, CI/CD)

** This Training Lab is generously supported by NVIDIA & the Deep Learning Institute **

Speakers
Rumen Angelov
Plugin Development Team Lead, GPU Audio
I completed my education in Music and Audio Technology at Bournemouth University, Dorset. Primarily experienced in audio plugin development for both Microsoft and Apple operating systems and the major plugin formats. Briefly worked on audio processing for proprietary ARM-based…

Andres Ezequiel Viso
Product Manager, Braingines SA / GPU Audio Inc
I studied Computer Science at the University of Buenos Aires and received my PhD on semantics for functional programming languages. I did a postdoc at Inria, France, in the context of the Software Heritage project, developing the provenance index for the SWH Archive. My interests vary…


Monday November 14, 2022 9:30am - 12:30pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

9:30am GMT

Online Workshop: Dynamic Cast: Practical Digital Signal Processing
ADC X Dynamic Cast - Practical Digital Signal Processing

What is a digital audio signal? How do we generate one, and in what ways can we manipulate it and extract useful information from it? In this workshop we'll be exploring the life cycle of an audio signal, from a continuous acoustic signal to a discrete digital signal. We'll explore practical methods for processing and shaping audio, including:
  • Sampling theory
  • Filtering 
  • Block vs sample-based processing 
  • Moving between the time and frequency domain
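To illustrate the block- vs sample-based processing bullet, here is a minimal pure-Python sketch (the helper names are ours, not workshop material) showing that a one-pole smoother produces identical output either way, provided the filter state is carried across block boundaries:

```python
import math

def one_pole_coeff(cutoff_hz, fs):
    # exponential smoothing coefficient for a simple one-pole lowpass
    return math.exp(-2.0 * math.pi * cutoff_hz / fs)

def process_sample(s, state, a):
    # y[n] = a*y[n-1] + (1-a)*x[n]
    return a * state + (1.0 - a) * s

def run_per_sample(x, a):
    y, state = [], 0.0
    for s in x:
        state = process_sample(s, state, a)
        y.append(state)
    return y

def run_per_block(x, a, block_size=64):
    y, state = [], 0.0
    for start in range(0, len(x), block_size):
        for s in x[start:start + block_size]:  # state persists across blocks
            state = process_sample(s, state, a)
            y.append(state)
    return y
```

Running run_per_block with any block size, even one that does not divide the signal length, yields exactly the same samples as run_per_sample; forgetting to carry the state across block boundaries is the classic bug this comparison catches.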

Dynamic Cast - Who Are We?

Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+, minority ethnic). Both Dynamic Cast workshops at ADC are designed to create an entry point to the industry for newcomers; everyone is welcome.
Requirements for this Workshop
TBA: keep an eye out for an email closer to the event.

A laptop and paper/pen would be beneficial, but no one will be turned away.

Speakers
Harriet Drury
Junior Software Engineer, Sound Stacks Ltd

Rachel Locke
C++ Software Engineer, Dynamic Cast

Anna Wszeborowska
Freelance Software Engineer
Anna is a freelance software developer and a PhD student at the Creative Computing Institute, University of the Arts London. She's worked on music production and live performance tools for the last 8 years. During her time at Ableton she contributed to the integration of the company's…


Monday November 14, 2022 9:30am - 12:30pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

9:30am GMT

Training Lab: Solving GPU Audio Processing Challenges, Parallelizing DSP Algorithms and Executing for Real-Time and Offline Rendering

GPU-based audio processing has long been considered something of a unicorn in both the pro audio and GPU industries. The potential of a GPU's parallel architecture is both exciting and elusive, owing to the computer science issues of working with sequential DSP algorithm designs and the fundamental differences between MIMD and SIMD devices. Now that it is possible, GPU-processed audio can offer any audio application processing power orders of magnitude greater than its CPU counterparts, fulfilling a cross-industry need that has arisen quickly as digital media content adopts AI, ML, cloud-based collaboration, virtual modeling, simulated acoustics and immersive audio, to name a few. Previous research had concluded that, because of heavy latencies and a myriad of computer science issues, DSP on GPUs was neither possible nor preferable. Recognizing the need for a viable, low-level standard and framework for real-time professional GPU audio processing, GPU AUDIO INC set out to solve these fundamental problems.

The purpose of this workshop is to give you hands-on experience of what GPU Audio processing solves and what it can mean for your software and the future of audio. It is a taste of the GPU Audio SDK.

In this course you will learn about the fundamental problems solved by the new GPU Audio standard, go deeper into our core technology, and learn how to incorporate real-time, low-latency, GPU-executed DSP algorithms into your projects. You will take part in a deep-dive, hands-on tutorial: building a simple processor, implementing your own IIR processor, measuring performance and playback, and “taking home” the code to build an FIR processor. All made possible by the GPU Audio Scheduler.

Prerequisite(s):
Familiarity with DSP algorithms and designs
Familiarity with modern SWE tools (IDEs, Git, CI/CD)
Note: a basic primer on elements of CUDA will be included in this workshop.

** This Training Lab is generously supported by NVIDIA & the Deep Learning Institute **

Speakers
Rumen Angelov
Plugin Development Team Lead, GPU Audio
I completed my education in Music and Audio Technology at Bournemouth University, Dorset. Primarily experienced in audio plugin development for both Microsoft and Apple operating systems and the major plugin formats. Briefly worked on audio processing for proprietary ARM-based…

Andres Ezequiel Viso
Product Manager, Braingines SA / GPU Audio Inc
I studied Computer Science at the University of Buenos Aires and received my PhD on semantics for functional programming languages. I did a postdoc at Inria, France, in the context of the Software Heritage project, developing the provenance index for the SWH Archive. My interests vary…



Monday November 14, 2022 9:30am - 12:30pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

9:30am GMT

Workshop: Build Your First Audio Plug-in with JUCE

Writing an audio plug-in can be a daunting task: there are a multitude of plug-in formats and DAWs, all with slightly different requirements. This workshop will guide you through the process of creating your first audio plug-in using the JUCE framework.

This workshop will cover:
- An introduction to JUCE
- Configuring a plug-in project
- Adding parameters to your plug-in and accessing them safely
- Creating a basic GUI
- Debugging and testing your plug-in

During the workshop, attendees will create a simple audio plug-in under the guidance of the JUCE developers.
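One aspect of accessing parameters safely is that a value changed from the GUI should not jump discontinuously on the audio thread; JUCE offers juce::SmoothedValue for this. As a rough framework-free illustration of the same linear-ramp idea (simplified Python, not JUCE code):

```python
class SmoothedValue:
    """Linear ramp toward a target over a fixed number of samples,
    so parameter changes don't produce audible clicks (zipper noise)."""
    def __init__(self, initial=0.0, ramp_samples=64):
        self.current = initial
        self.target = initial
        self.step = 0.0
        self.remaining = 0
        self.ramp_samples = ramp_samples

    def set_target(self, value):
        self.target = value
        self.remaining = self.ramp_samples
        self.step = (value - self.current) / self.ramp_samples

    def next_value(self):
        if self.remaining > 0:
            self.current += self.step
            self.remaining -= 1
            if self.remaining == 0:
                self.current = self.target  # land exactly on target
        return self.current

def apply_gain(buffer, gain):
    # per-sample gain so a moving target ramps smoothly through the block
    return [s * gain.next_value() for s in buffer]
```

Ramping the gain to a new target over a few samples removes the audible "zipper" artifact that a hard jump in the parameter value would cause.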

Workshop Requirements:

Attendees must be able to compile the projects supplied in the most recent JUCE SDK using the corresponding IDE for their computer: Visual Studio 2022 for Windows, Xcode for macOS, and a Makefile for Linux. This may require installing Visual Studio 2022, Xcode or all of the Linux dependencies. There will not be time to do this within the workshop itself.

You can clone JUCE with git from https://github.com/juce-framework/JUCE, or download the latest release from https://github.com/juce-framework/JUCE/releases/latest.

Windows: Open JUCE\extras\AudioPluginHost\Builds\VisualStudio2022\AudioPluginHost.sln and build in Visual Studio 2022.

macOS: Open JUCE/extras/AudioPluginHost/Builds/MacOSX/AudioPluginHost.xcodeproj and build in Xcode.

Linux: Run make in JUCE/extras/AudioPluginHost/Builds/LinuxMakefile.

Download the workshop materials: https://data.audio.dev/workshops/2022/build-first-plugin-with-juce/materials.zip

Speakers
Tom Poole
Director, JUCE
Tom Poole is a director of the open source, cross platform, C++ framework JUCE (https://juce.com). Before focussing on JUCE he completed a PhD on massively parallel quantum Monte Carlo simulations of materials, and has been a foundational part of successful big-data and audio plug-in startups…

Reuben Thomas
Software Engineer, JUCE
Reuben has been a JUCE user since 2013, using it to build a room-acoustics simulator during his MA (Res) at the University of Huddersfield, audio analysis tools at IRCAM, and consumer music software at ROLI. In early 2020, Reuben became a full-time maintainer of the JUCE framework…

Attila Szarvas
C++ Software Engineer, JUCE
I studied electrical engineering and got drawn into signal processing and software development while working on active noise cancelling research topics. I've been working ever since as a programmer in various fields, but the most fun I had was doing audio plugin development in the…

Oliver James
C++ Software Engineer, JUCE
Hi! I'm Oli. Before joining the JUCE team, I used the JUCE framework to create real-time audio platforms and various 'tooling' plugins. I've also worked on low-latency audio/visual generation tools and networking tools.


Monday November 14, 2022 9:30am - 12:30pm GMT
5) BACKSPACE

9:30am GMT

Workshop: Dynamic Cast: Practical Digital Signal Processing

ADC X Dynamic Cast - Practical Digital Signal Processing

What is a digital audio signal? How do we generate one, and in what ways can we manipulate it and extract useful information from it? In this workshop we'll be exploring the life cycle of an audio signal, from a continuous acoustic signal to a discrete digital signal. We'll explore practical methods for processing and shaping audio, including:
  • Sampling theory
  • Filtering 
  • Block vs sample-based processing 
  • Moving between the time and frequency domain
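As a small taste of moving between the time and frequency domains, here is a naive discrete Fourier transform in plain Python (illustrative only; real code would use an FFT). A sinusoid that completes exactly 5 cycles in the analysis window shows up as a single spectral peak at bin 5:

```python
import cmath, math

def dft(x):
    """Naive DFT, O(N^2): X[k] = sum_n x[n] * e^(-2*pi*j*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# a pure sine that completes exactly 5 cycles over N samples
N = 64
x = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]
X = dft(x)
mags = [abs(v) for v in X[:N // 2]]   # keep the non-redundant half of the spectrum
peak_bin = max(range(N // 2), key=mags.__getitem__)
```

For a real-valued sine landing exactly on a bin, the peak magnitude is N/2; in-between frequencies smear across neighbouring bins, which is where windowing enters the picture.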

Dynamic Cast - Who Are We?

Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+, minority ethnic). Both Dynamic Cast workshops at ADC are designed to create an entry point to the industry for newcomers; everyone is welcome.
Requirements for this Workshop
TBA: keep an eye out for an email closer to the event.

A laptop and paper/pen would be beneficial, but no one will be turned away.

Speakers
Anna Wszeborowska
Freelance Software Engineer
Anna is a freelance software developer and a PhD student at the Creative Computing Institute, University of the Arts London. She's worked on music production and live performance tools for the last 8 years. During her time at Ableton she contributed to the integration of the company's…

Harriet Drury
Junior Software Engineer, Sound Stacks Ltd

Rachel Locke
C++ Software Engineer, Dynamic Cast


Monday November 14, 2022 9:30am - 12:30pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

9:30am GMT

Workshop: Elk Audio OS: hassle-free embedded development on your computer

Working with embedded hardware is challenging, and the typical developer workflow is generally much slower than writing audio software on a general-purpose computer.
Elk Audio OS is a low-latency embedded Linux distribution and a set of user-space tools that significantly streamline this process.
In this workshop, we present a set of tools built around Elk's audio engine and plugin host, SUSHI, which can be used to prototype an audio product entirely on your computer, without the hassle of dealing with an embedded hardware platform.
Attendees will learn:
  • How to set up a chain of plugins
  • How to write a control application that uses SUSHI's API to manipulate the audio graph and its parameters
  • How to implement the control of your future embedded device using physical controls, remote GUIs, and end-user development tools
  • How to use additional tools to monitor performance and problems
In the second part of the workshop, participants will use the same tools to create a prototype of an embedded audio device, such as a simple synthesiser or stompbox pedal. A few Elk hardware units will be available for those who want to run their experiments on real hardware.
Requirements for the participants:
  • A macOS laptop (10.15 or later), or a Linux laptop with a recent distribution and the JACK audio server installed
  • Basic knowledge of one of the two languages that will be used for control client examples:
  1. Python (recommended)
  2. C++
Optional requirements:
  • Small MIDI controller (for synthesiser examples)
  • Development environment for writing your own plugins in e.g. JUCE
  • Having installed the Elk Audio OS SDK if you want to cross-compile for the real HW units

Speakers
Stefano Zambon
CTO, Elk
Wearing several hats at a music tech startup building Elk Audio OS. Loves all aspects of music DSP, from math-intensive algorithms to low-level kernel hacking for squeezing latency and performance.

Ilias Bergström
Senior Software Engineer, Elk Audio
Computer scientist, researcher, interaction designer, and musician, with a love for all music but especially live performance. I've worked on developing several applications for live music, audiovisual performance, and use by experts, mainly using C++. I get…

Maxime Gendebien
Python Developer, Elk Audio
The road that led Max to be a full-time Python developer is not a straight one. Previous careers include jazz guitarist, recording engineer and mixing engineer, which opened the doors of code through Arduino and Max/MSP. It's only after moving his family to Sweden that he fully committed…


Monday November 14, 2022 9:30am - 12:30pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

12:30pm GMT

Workshop Lunch
Monday November 14, 2022 12:30pm - 2:00pm GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Online Workshop: Dynamic Cast: Practical Software Engineering
ADC X Dynamic Cast - Practical Software Engineering

In this workshop we will share techniques used in everyday programming work, to prepare you for contributing to the large code bases we deal with in professional contexts.

We will discuss how to read and analyze code, find entry points into a complex system in order to add features or debug problems, and make sure your code is well designed and therefore easy to maintain, change or build upon. We'll also look into building and sharing your programs.

Dynamic Cast - Who Are We?

Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+, minority ethnic). Both Dynamic Cast workshops at ADC are designed to create an entry point to the industry for newcomers; everyone is welcome.

Requirements for this Workshop
TBA: keep an eye out for an email closer to the event.

A laptop and paper/pen would be beneficial, but no one will be turned away.

Speakers
Harriet Drury
Junior Software Engineer, Sound Stacks Ltd

Rachel Locke
C++ Software Engineer, Dynamic Cast

Anna Wszeborowska
Freelance Software Engineer
Anna is a freelance software developer and a PhD student at the Creative Computing Institute, University of the Arts London. She's worked on music production and live performance tools for the last 8 years. During her time at Ableton she contributed to the integration of the company's…


Monday November 14, 2022 2:00pm - 5:00pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Workshop: Analog Circuit Modelling for Software Developers using the Point-To-Point Library

During this workshop, participants will learn about digital modeling of analog circuits. This will be applied to the creation of several JUCE plug-ins. Traditional modeling techniques will be discussed along with the presentation of a circuit analysis library which automates the modeling process. This library, called "Point-To-Point Modeling," is intended for audio software developers interested in rapid prototyping and implementation of circuit modeling. Example JUCE plug-ins using the Point-To-Point library will be demonstrated, along with the process of quickly converting arbitrary schematics into C++ code.

  • Attendees should have some experience using JUCE
Code repository for the workshop:
https://github.com/HackAudio/PointToPoint_LT
Code repository as an additional resource:
https://github.com/HackAudio/PointToPoint_MATLAB

Speakers
Eric Tarr

Dr. Eric Tarr teaches classes on digital audio, computer programming, signal processing and analysis at Belmont University. He received a Ph.D., M.S., and B.S. in Electrical and Computer Engineering from the Ohio State University. He received a B.A. in Mathematics and a minor in Music…


Monday November 14, 2022 2:00pm - 5:00pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Workshop: Apple Audio Office Hours

New to developing audio applications for Apple platforms? Need help getting your Audio Unit to work properly? Not sure how to integrate MIDI into your application? Bring your laptop, code, and questions to this workshop to get help from Apple audio experts. From low latency, real-time APIs for audio I/O, to Audio Unit instruments and effects, to CoreMIDI and beyond, Apple platforms provide a rich set of APIs for creating anything from simple audio playback applications to sophisticated digital audio workstations.  

Speakers
Tony Guetta
Software Engineering Manager, Apple Inc.

Doug Wyatt
Audio API Architect, Apple

Marc Boucek
Core Audio Software Engineer, Apple


Monday November 14, 2022 2:00pm - 5:00pm GMT
5) BACKSPACE

2:00pm GMT

Workshop: Dynamic Cast: Practical Software Engineering

ADC X Dynamic Cast - Practical Software Engineering

In this workshop we will share techniques used in everyday programming work, to prepare you for contributing to the large code bases we deal with in professional contexts.

We will discuss how to read and analyze code, find entry points into a complex system in order to add features or debug problems, and make sure your code is well designed and therefore easy to maintain, change or build upon. We'll also look into building and sharing your programs.

Dynamic Cast - Who Are We?

Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+, minority ethnic). Both Dynamic Cast workshops at ADC are designed to create an entry point to the industry for newcomers; everyone is welcome.

Requirements for this Workshop
TBA: keep an eye out for an email closer to the event.

A laptop and paper/pen would be beneficial, but no one will be turned away.

Speakers
Anna Wszeborowska
Freelance Software Engineer
Anna is a freelance software developer and a PhD student at the Creative Computing Institute, University of the Arts London. She's worked on music production and live performance tools for the last 8 years. During her time at Ableton she contributed to the integration of the company's…

Harriet Drury
Junior Software Engineer, Sound Stacks Ltd

Rachel Locke
C++ Software Engineer, Dynamic Cast


Monday November 14, 2022 2:00pm - 5:00pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Workshop: Generating Audio Code with Max/MSP

This workshop will lead you step-by-step through the process of exporting high-performance audio source code from Cycling ’74’s Max/MSP visual programming environment. We’ll show you how to create VST plug-ins and Raspberry Pi-based instruments by combining C++ DSP code generated from Max with JUCE and other frameworks. We’ll also explore strategies for integrating your own algorithms into the Max code generation system.

Attendees will leave with practical experience in the Max/MSP code export workflow as well as the basics of how to integrate our generated code into your audio programming projects.

To get the most out of this workshop, we recommend making sure that you've installed git, node, and cmake. While Max and RNBO aren't necessary, they are highly recommended, and workshop attendees can get a free, temporary license of both Max and RNBO. Please contact the event organizer ahead of time if you'd like to take advantage of the free license offer.

Speakers
David Zicarelli
CEO, Cycling '74
David Zicarelli is a computer programmer and improvising musician who designs interactive software to support creative expression. He has been working on Max and Max-related projects since the late 1980s. Prior to his Max life he created one of the first graphical voice editors for…

Sam Tarakajian
Product Engineer, Cycling '74
Sam Tarakajian is a Brooklyn-based developer and artist, focusing on interface design for musical and creative tools. He's best known for his YouTube tutorial series, "Delicious Max/MSP", where he tries, sometimes successfully, to bring humor to teaching Max. As an engineer at Cycling…


Monday November 14, 2022 2:00pm - 5:00pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

4:00pm GMT

Online Open House
We will be opening our virtual venue, hosted on Gather Town, to online attendees so that they can connect ahead of time to test things out, get familiar with the online conference systems, and chat, socialize and interact with other attendees through a dynamic video chat system. Explore the venue, interact and have fun!

We will also open up access to the online conference web lobby page so you can also test this out and verify you are able to access the systems ahead of the event starting on Monday morning.

Online tech support will be available for the duration of this session, so we highly recommend all attendees take this opportunity to verify they can access the systems and troubleshoot any technical issues which might otherwise prevent or slow down access to the event.

Monday November 14, 2022 4:00pm - 5:00pm GMT
Gather Town

4:00pm GMT

Socialize, Network & Explore The Virtual Venue
Interact with other attendees, visit our numerous exhibitors and their interactive exhibition booths and take part in a fun puzzle treasure hunt game during breaks in our scheduled content! Have you visited the cloud lounge yet?

Monday November 14, 2022 4:00pm - 5:00pm GMT
Gather Town

5:10pm GMT

Workshop Panel: Audio Industry Health Check: A Conversation on the Current State of the Audio Industry and Where It May Be Headed Next
Come join us and listen to our fireside chat-style panel with audio software and hardware industry leaders as we explore the current state of the audio industry.

This panel will explore some of the key challenges in audio software and hardware development today as well as new technologies and trends that are shaping and/or disrupting its future.

Speakers
Pete Goodliffe
CTO, inMusic Brands
Experienced software developer, architect/product designer, leader, columnist, speaker, and author. Herder of cats and shepherd of products. Specialises in music industry projects, often involving high-quality C++ on desktop and embedded platforms, and iOS development. Currently…

Angus Hewlett
CTO, Image-Line Group

Rebekah Wilson
CEO, Source Elements LLC
Rebekah is the technical co-founder and CEO who co-created the entire suite of Source Elements software. With a degree in music composition and a lifetime of love for technology, Rebekah has focused her career as a composer, electronic music researcher, and software developer, with…

Anna Wszeborowska
Freelance Software Engineer
Anna is a freelance software developer and a PhD student at the Creative Computing Institute, University of the Arts London. She's worked on music production and live performance tools for the last 8 years. During her time at Ableton she contributed to the integration of the company's…

Tim Carroll
CEO, Focusrite
Tim is the CEO of the Focusrite Group, a publicly listed company on the UK's AIM market. Since joining Focusrite in 2017, the Group has grown significantly from 2 brands to 10, and revenue and profit have tripled. Tim has been in the music industry all of his adult life, starting…


Monday November 14, 2022 5:10pm - 6:00pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

6:00pm GMT

ADC Welcome Evening
Come one, come all, to the ADC Welcome Reception! Whether it's your first time attending ADC, or your eighth, meet and chat with fellow attendees at an informal gathering the night before the 8th Audio Developer Conference!

If you are new to ADC, this will be a wonderful opportunity to get to know more community members! Meet some new friends, and see them the very next day at the conference! Members of the ADC team will be there to welcome you, and pleased to make some friendly introductions.

If you are already well connected, we invite you to help us welcome new folks and make them feel comfortable among us.

Monday November 14, 2022 6:00pm - 9:00pm GMT
Strongroom Bar 120-124 Curtain Rd, London EC2A 3SQ, UK
 
Tuesday, November 15
 

8:00am GMT

Breakfast
Tuesday November 15, 2022 8:00am - 8:30am GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

8:30am GMT

Welcome Address
IF YOU ARE ATTENDING ONLINE, ALL TALK SESSIONS CAN BE ACCESSED FROM THE MAIN LOBBY: https://conference.audio.dev

Tuesday November 15, 2022 8:30am - 8:50am GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

9:00am GMT

Implementing Real-Time Parallel DSP on GPUs
GPU-powered audio has long been considered something of a unicorn in the pro audio and accelerated computing industries alike. The implications of accelerating DSP on a GPU's parallel architecture are simultaneously exciting and incredibly frustrating; to many it seems that the ease with which GPUs handle massive numbers of tasks is rivalled only by the difficulty of understanding their architecture, particularly for the average DSP developer. Until now, the state of research has always concluded that, because of heavy latency and a myriad of computer science issues, DSP on GPUs was neither possible nor preferable. This is no longer the case.

The implications and use cases are great: ultra-fast plugins, scalable power, hundreds or even thousands of channels at low latency, dramatically better software performance (10x-100x), cloud processing infrastructure, accelerated AI/ML and more. GPUs can now offer a bright future for DSP. In this talk we will share the challenges and solutions of GPU-based DSP acceleration.

  1. Why GPUs?
  2. 3 Challenges of GPU-based Audio Processing
    - Parallelism and Heterogeneity
    - Multiple Tracks and Effects
    - Data Transfer Problems: GPU <> CPU
  3. Core Component Overview: The Scheduler
    - Host Scheduler and Device Scheduler
    - How Scheduler Addresses the “3 Challenges”
  4. Some Examples: FIR and IIR Algorithms - Can They Be Parallelized?
    - Algorithmic and Platform Optimization
    - GPU Audio Workflow Schematics
      - GPU Audio Component
      - DSP API
      - Processor API
      - DSP Components Library
  5. Roadmap and Some Use Case Considerations
  6. Q&A and Invitation to Training Lab (Gain, IIR and FIR Convolver Hands-On Training Lab)
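On item 4: an FIR filter has no feedback, so every output sample depends only on input samples and can be computed independently, whereas an IIR's dependence on y[n-1] blocks this naive split. A thread-based Python sketch of that data parallelism (illustrative only, nothing like a real GPU kernel):

```python
from concurrent.futures import ThreadPoolExecutor

def fir_sample(x, h, n):
    # y[n] = sum_k h[k] * x[n-k]: depends only on inputs, never on other outputs
    return sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))

def fir_serial(x, h):
    return [fir_sample(x, h, n) for n in range(len(x))]

def fir_parallel(x, h, workers=4):
    # every output index is an independent task, so the work can be
    # distributed freely across threads (or, on a GPU, across cores)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda n: fir_sample(x, h, n), range(len(x))))
```

For an IIR, the per-sample function would also need the previous output, reintroducing exactly the serial dependency that the talk's scheduler and algorithm work address.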


Speakers
Rumen Angelov
Plugin Development Team Lead, GPU Audio
I completed my education in Music and Audio Technology at Bournemouth University, Dorset. Primarily experienced in audio plugin development for both Microsoft and Apple operating systems and the major plugin formats. Briefly worked on audio processing for proprietary ARM-based…

Andres Ezequiel Viso
Product Manager, Braingines SA / GPU Audio Inc
I studied Computer Science at the University of Buenos Aires and received my PhD on semantics for functional programming languages. I did a postdoc at Inria, France, in the context of the Software Heritage project, developing the provenance index for the SWH Archive. My interests vary…


Tuesday November 15, 2022 9:00am - 9:50am GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

9:00am GMT

Introduction to the Audio Definition Model and its use for Spatial Audio and Next Generation Audio experiences
Spatial audio has gone way beyond the good old 5.1 system and is reaching mainstream audiences through mobile phones and TV soundbars. The spatial audio content creation ecosystem and its workflows are developing at a rapid pace around the ADM-BW64 file format, which is now supported by major DAWs. This talk presents the key concepts of the Audio Definition Model (ADM), its benefits for studio, live and broadcast workflows, and why it is interesting as an interoperable model for spatial audio in general. Beyond spatial audio, ADM also enables use cases such as audio personalization and interactivity, which will be highlighted.


Speakers
Guillaume Le Nost
Managing Director, L-acoustics UK Ltd
Shaping the future of live sound with immersive audio technologies and innovative sound experiences. Interests in spatial audio, object-based audio, creative technologies, music technology and live sound. Keen musician (flute, bass, piano).

David Marston
Senior R&D Engineer, BBC


Tuesday November 15, 2022 9:00am - 9:50am GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

9:00am GMT

Recent Trends in Virtual Analog Modeling Based on Nonlinear Wave Digital Filters
Virtual analog modeling is the practice of digitally emulating analog audio gear. For many years, because of limited computational power, only the emulation of nonlinear audio circuits containing a few one-port nonlinearities was feasible. Recently, with the rise of high-performing processors, different techniques have been proposed to emulate increasingly complex circuits. Among white-box methods, Wave Digital Filters (WDFs) are gaining resounding success thanks to their remarkable features, such as stability, accuracy, and efficiency. This talk first provides background knowledge on WDFs and then addresses the latest research trends and developments in emulating audio circuits with multiple one-port nonlinearities in the Wave Digital domain.
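To make the wave-variable idea concrete, here is a minimal pure-Python WDF of a series RC lowpass: an ideal voltage source at the root of a three-port series adaptor whose leaves are a resistor and a capacitor. This is a textbook construction, not code from the talk. Waves are a = v + Ri and b = v - Ri, the bilinear transform turns the capacitor into a unit-delay reflectance with port resistance 1/(2*fs*C), and choosing R3 = R1 + R2 makes the adaptor's up port reflection-free:

```python
def wdf_rc_lowpass(vin, fs, R, C):
    """Sample-by-sample WDF simulation; returns the capacitor voltage."""
    Rc = 1.0 / (2.0 * fs * C)   # capacitor port resistance (bilinear transform)
    R3 = R + Rc                 # adapted up-port resistance: reflection-free port
    b_cap = 0.0                 # capacitor state: reflected wave = last incident wave
    out = []
    for vs in vin:
        # waves travelling up the tree
        a1 = 0.0                # matched resistor reflects nothing
        a2 = b_cap              # capacitor reflects its stored wave
        b3 = -(a1 + a2)         # up wave; independent of a3 thanks to adaptation
        a3 = 2.0 * vs - b3      # ideal voltage source at the root: b = 2*Vs - a
        # scatter back down: b_k = a_k - (2*R_k / (R1+R2+R3)) * (a1+a2+a3)
        total = a1 + a2 + a3
        b2 = a2 - (Rc / R3) * total      # 2*Rc/(2*R3) simplifies to Rc/R3
        out.append(-(b2 + b_cap) / 2.0)  # v = (a+b)/2; sign from series-loop orientation
        b_cap = b2              # capacitor reflectance: b[n+1] = a[n]
    return out
```

With R = 1 kOhm and C = 1 uF (a 1 ms time constant), a unit step settles to 1.0, matching the analog circuit. Swapping the one-port leaves, including nonlinear ones, is the kind of modularity that makes WDFs attractive.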


Speakers

Riccardo Giampiccolo

PhD Candidate, Politecnico di Milano
Riccardo Giampiccolo received both the B.S. and the M.S. degrees in electronics engineering from the Politecnico di Milano, Italy, in 2017 and 2020, respectively. He is currently a PhD Candidate in information technology at the Dipartimento di Elettronica, Informazione e Bioingegneria...



Tuesday November 15, 2022 9:00am - 9:50am GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

9:00am GMT

Using Smart DSP to Prevent 1.1 Billion People from Severe Hearing Loss while Enhancing the Overall Experience
The World Health Organisation estimates that 1.1 billion people worldwide are at risk of severe hearing loss as a result of loud sounds in recreational settings. Considering that headphones are now capable of deafening sound levels and that entertainment content is accessible at our fingertips anywhere and anytime, this comes as no surprise. 

While this is a frightening fact, very little is being done to prevent it from becoming the next big issue. The only thing we hear is "keep the volume down", but most of us ignore it. Why is this? What makes more than a billion people prefer a louder listening experience to a safer one? We believe that the answer can be found in psychoacoustic research, and not in compulsive self-harming behaviour. (In fact, it has been shown that our ears pick up more on low and high frequencies at high listening levels than at low listening levels.) 

It is our mission to create immersive, exciting listening experiences at safe levels. With the use of smart DSPs based on psychoacoustic research, we have worked continuously since 2018 to find effective solutions for making it all happen.




Speakers

Luigi Cosi

Chief Executive, OIKLA


Tuesday November 15, 2022 9:00am - 9:50am GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

10:00am GMT

Better Adaptive Music for Starship Troopers
Kejero's glossary: kejero.com/adc

Today's common adaptive music techniques often still fall short when it comes to melodic or orchestral video game scores.

Can you hear the transition to another track? Or an instrument fading in? That's the sound of a system at work.

In this talk, Kejero will explain how he designed a system that doesn't use transitions, and how he implemented it in Starship Troopers: Terran Command, which required a massive orchestral score: music that can switch between calm and chaotic in a heartbeat, yet still sounds like one cohesive, intentional piece of music.

Outline:
  • Level 0: Existing Techniques
  • Level 1: A New Foundation
  • Level 2: An Intelligent Conductor
  • Level 3: Extended Techniques
  • Two-way Communication
  • 10 Extremely Practical Tips & Tricks


Speakers

Kejero

Composer, Kejero
Kejero (www.kejero.com) recently provided the massive orchestral score for Starship Troopers: Terran Command. He reinvented video game scoring with his Better Adaptive Music software. With BAM, the music can switch between calm and chaotic in a heartbeat, yet still sound like one cohesive, intentional piece of music.


Tuesday November 15, 2022 10:00am - 10:50am GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

10:00am GMT

Neutone - Real-time AI audio plugin for DAWs
When music and audio creators want to experiment with the latest AI technologies, such as deep learning, they currently face a steep learning curve: programming in Python with libraries such as PyTorch, and using dedicated processing hardware such as GPUs. Neutone solves these challenges with a VST/AU plugin that runs in real time on the CPU inside DAWs. A number of timbre transfer models and deep-learning-powered effects have already been published for this plugin and can be downloaded at runtime.


Speakers

Andrew Fyfe

AI Application Engineer, Qosmo, Inc
Andrew is Tech Lead on the Neutone project at Qosmo, Inc. He is an AI Application and DSP engineer who has worked at Krotos Audio, Audio Imperia and Otago Engineering, developing innovative audio software for music/sound production. His PhD research focuses on Neural Audio Synthesis...

Christopher Mitcheltree

PhD Student / Research Engineer, Queen Mary University of London / Qosmo
I'm Christopher, a research engineer at Qosmo and PhD student in the Artificial Intelligence and Music (AIM) program at Queen Mary University of London. My research area is representation learning for modulations in synthesizers and audio effects and how they relate to audio production...



Tuesday November 15, 2022 10:00am - 10:50am GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

10:00am GMT

Thread synchronisation in real-time audio processing with RCU (Read-Copy-Update)
When developing real-time audio processing applications in C++, the following problem arises almost inevitably: how can we share data between the real-time audio thread and the other threads (such as a GUI thread) in a way that is real-time safe? How can we synchronise reads and writes to C++ objects across threads, and manage the lifetime of these objects, while remaining wait-free on the real-time thread?

In certain cases, we can store the objects inside a std::atomic or use techniques such as lock-free FIFOs, spinlocks, and double buffering. However, typically in the generic case of reading the value of a sufficiently large, persistent object on the real-time thread that is simultaneously mutated on another thread, none of these are applicable. What do we do then? How do we avoid memory leaks while ensuring that the real-time audio thread won't end up blocking, performing deallocations, or reading an object that has already been deleted from under it?

One possibility is to use atomic_shared_ptr, but correct and lock-free implementations are hard to come by. And even if you have such an implementation, this approach typically suffers from slow performance, has poor portability across platforms, and introduces even more complexity. Is there an alternative solution?

If we look beyond the audio industry, it turns out there is actually another strategy that solves this problem quite elegantly: RCU (Read-Copy-Update). RCU has been successfully used in the Linux kernel for two decades. More recently, it has been adapted for user space applications as well. There is even a proposal to add RCU to the C++ standard library.

In this talk, we take a detailed look at the RCU (Read-Copy-Update) mechanism. We discuss how it works, what the tradeoffs and design choices are, and how to adapt the algorithm for a real-time audio context.
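To give a flavour of the mechanism, here is a deliberately minimal RCU-style sketch (illustrative only, not the talk's code): the audio thread reads wait-free, while the writer publishes a fresh copy and retires the old object for later deletion. The hard part of real RCU — detecting when a grace period has elapsed — is reduced here to an explicit reclaim() call by the writer.

```cpp
#include <atomic>
#include <vector>

struct Coeffs { float gain = 1.0f; };

class RcuSlot {
public:
    RcuSlot() : current(new Coeffs{}) {}
    ~RcuSlot() { reclaim(); delete current.load(); }

    // Real-time thread: wait-free, no locks, no allocation, no deallocation.
    const Coeffs* read() const { return current.load(std::memory_order_acquire); }

    // Writer thread: copy-update-publish, then retire the old object.
    void update(float newGain) {
        Coeffs* fresh = new Coeffs{newGain};
        Coeffs* old = current.exchange(fresh, std::memory_order_acq_rel);
        retired.push_back(old); // must not be deleted until no reader can hold it
    }

    // Writer thread, once a grace period guarantees no reader still holds
    // a retired pointer. Detecting that moment is the essence of real RCU.
    void reclaim() {
        for (Coeffs* c : retired) delete c;
        retired.clear();
    }

private:
    std::atomic<Coeffs*> current;
    std::vector<Coeffs*> retired; // touched by the writer only, so no locking
};
```

The sketch sidesteps the central question — when is it safe to call reclaim()? — which is exactly the grace-period machinery the talk examines.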


Speakers

Timur Doumler

Developer Advocate, JetBrains
Timur Doumler is C++ Developer Advocate at JetBrains and an active member of the ISO C++ standard committee. As a developer, he worked many years in the audio and music technology industry and co-founded the music tech startup Cradle. Timur is passionate about building inclusive communities...


Tuesday November 15, 2022 10:00am - 10:50am GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

10:00am GMT

“Constexpr ALL the Things!” in Audio Programming
The most effective moment for a developer to receive feedback on their code is at the very moment they are composing it; however, compile-time feedback is most often limited to immediate, surface-level concerns (missing semicolons, incorrect types being returned, etc.). Leveraging the latest features of C++20, this talk will walk the audience through writing unit tests and regression tests for audio plugins that run instantly and perpetually at compile-time. Along the way, we will review the history of compile-time programming in C++, the additional free benefits from writing code that can be run at compile-time, and we will look ahead to what else we will be able to accomplish in C++23. Audio developers will be able to catch bugs in their code immediately after writing them and sleep soundly at night assured that their changes have not inadvertently altered the audio processing of their product.


Speakers

Adam Shield

DevOps Engineer / Audio Plug-in Developer, Antares Audio Technologies (Auto-Tune)


Tuesday November 15, 2022 10:00am - 10:50am GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

10:50am GMT

Break
Tuesday November 15, 2022 10:50am - 11:20am GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Detaching the UI - Options and challenges for controlling headless and remote audio software
Audio software running in “headless” or remote contexts, i.e. without access to a tightly integrated GUI, is increasingly common, either when running in embedded devices, on a remote cloud server, or distributed over a local network where remote/automated control is desired.

The parameter controls exposed over plugin APIs are insufficient, since the practically usable implementations only support a fraction of the variety necessary. Developers expose many additional controls through the GUI, which doesn't translate to headless or remote uses.

Plugin GUIs can be as involved as a fully-fledged DAW, exposing complex interactions. Although MIDI 2.0 will in the medium term replace some of what we discuss, even with full adoption it doesn't cover the many interactions possible from a GUI.

In this talk we discuss three basic distributed-systems patterns for controlling audio software at run-time over a network: simple socket messaging, request/response, and publish/subscribe.

We also demonstrate their implementation using the OSC and gRPC frameworks, discussing challenges and best practices specific to real-time audio.

Grounding the above, we provide a pair of ready-to-use, fully-fledged open-source applications implementing our suggestions, both available to download.
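To make the first pattern concrete, here is a sketch of what a simple socket message might look like when encoded as OSC (following the OSC 1.0 encoding rules; the address and helper name are illustrative, not taken from the talk's applications):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Encode an OSC message setting one named parameter to a float value,
// ready to be sent over a UDP socket to a headless audio process.
std::vector<uint8_t> encodeOscFloat(const std::string& address, float value) {
    std::vector<uint8_t> out;
    auto padString = [&out](const std::string& s) {
        out.insert(out.end(), s.begin(), s.end());
        out.push_back(0);                             // terminating NUL
        while (out.size() % 4 != 0) out.push_back(0); // pad to 4-byte boundary
    };
    padString(address);  // e.g. "/filter/cutoff"
    padString(",f");     // type tag string: one float argument
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);
    for (int shift = 24; shift >= 0; shift -= 8)      // big-endian per the OSC spec
        out.push_back(uint8_t(bits >> shift));
    return out;
}
```

Request/response and publish/subscribe, as provided by gRPC, add delivery guarantees and richer schemas on top of this kind of raw message.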


Speakers

Ilias Bergström

Senior Software Engineer, Elk Audio
Senior Software Engineer, Elk. Computer Scientist, Researcher, Interaction Designer, Musician, with a love for all music but especially live performance. I've worked on developing several applications for live music, audiovisual performance, and use by experts, mainly using C++. I get...

Gustav Andersson

Senior SW Dev, Elk Audio
Will code C++ and python for fun and profit. Developer, guitar player and electronic music producer with a deep fascination with everything that makes sounds in one form or another. Currently on my mind: modern C++ methods, DSP algos, vintage digital/analog hybrid synths.


Tuesday November 15, 2022 11:20am - 12:10pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Synchronizing audio on the web
This talk will focus on how multiple audio sources can be synchronized on the web. That includes multiple audio sources on a single page as well as multi-device scenarios in which the same website is used on multiple devices simultaneously. The talk will also cover how audio on the web can be synchronized with external sources like Ableton Link, MIDI clock or MIDI timecode.

Typical use cases for synchronized audio apps are listening parties, multi-device playback systems or interactive music applications.

The idea of this talk is to give an overview of the various APIs, services and protocols that can be used to synchronize audio depending on the required level of accuracy. It will also show some practical examples.
Syncing audio with video will also be covered briefly, since many of the browser APIs work very similarly for audio and video anyway.


Speakers

Christoph Guttandin

Web Developer, Media Codings
I'm a freelance web developer specialized in building multimedia web applications. I'm passionate about everything that can be used to create sound inside a browser. I've recently worked on streaming solutions and interactive music applications for clients like TV stations, streaming...


Tuesday November 15, 2022 11:20am - 12:10pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Trying to create an Audio Software House
Creating audio software is hard. Creating an audio software company is even harder. This session will cover our experience going from a single dev to 20 devs working in multiple teams, and all the pains and hacks we had along the way: pipeline, tests, management, tools, processes, monorepo, etc. This is not about the correct way of doing things, but about a journey and the choices and pains of that journey.


Speakers

Nuno Fonseca

CEO, Sound Particles, S.A.

Vitor Carreira

CTO, Sound Particles, S.A.


Tuesday November 15, 2022 11:20am - 12:10pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Connecting Audio Tools for Game Development
It is common for game titles to include more than 50,000 audio files, including sound effects, dialogue lines and music clips. One of the challenges for game developers is to build an efficient pipeline for creating, organizing and processing such a massive number of audio files, generally coming from different tools.
For instance, REAPER has become a popular choice in the game audio community as a sound design DAW: it can handle large projects and can be adapted and integrated into different workflows. How can we help creators focus on content and game experience by reducing the repetitive and error-prone tasks that are currently part of their daily work?
In this talk, we explore the technologies (JUCE, WAAPI) and the architecture of ReaWwise, a Wwise integration for REAPER focused on automation. We discuss how we can connect different tools, including other DAWs, to Wwise from various environments. We will also take an in-depth look at how we implemented WAAPI and WAQL as the core of the extensibility of Wwise.
  • Presentation of speakers
  • Quick overview of what is Wwise 
    • Interactive Audio Challenges
  • Common sound designer example
    • Reviewing pain points
  • Quick demo of what is accomplished by ReaWwise
    • Create object structures/hierarchies in Wwise
    • Import audio files into Wwise
  • REAPER - Why?
  • ReaWwise Tech Overview
    • CMake
    • JUCE
    • WAAPI
    • Components: GUI, DawContext, Data
    • Evaluating how other DAWs could be extended
  • WAAPI Tech Overview
    • Architecture
    • WAQL: a query language
    • Use cases - Examples
    • Benefits of a generic data model
  • Closing Remarks
    • The project being open source
    • Exposing WAAPI to ReaScript/Lua



Speakers

Bernard Rodrigue

Director, Wwise Experience, Audiokinetic
Bernard Rodrigue is Director, Wwise Experience at Audiokinetic. He joined Audiokinetic in 2005 and actively participated in developing the foundations of Wwise. Today, Bernard continues to lead several projects related to the advancement and expansion of Wwise.

Andrew Costa

Software Developer, Audiokinetic
Andrew Costa has been a Software Developer at Audiokinetic since 2021. He's been working on DAW extensions and plugins, namely ReaWwise and the Wwise VST plugins. He's passionate about software development, DAWs and music production.


Tuesday November 15, 2022 11:20am - 12:10pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

10 Things Every ARA Programmer Should Know
ARA (Audio Random Access) is an API created by Celemony and PreSonus to enable a new class of audio plug-ins that are not used in realtime effect slots, but instead are tied into the arrangement of the DAW.

It is designed for plug-ins such as Melodyne which intrinsically need to evaluate the audio material in its entirety, not sliced into small realtime buffers. In addition to providing random access to the audio samples, ARA enables bi-directional communication about musical properties such as tempo maps, time and key signatures, or chord progressions of both the original audio material and the playback context.

Rather than doing a detailed dive into the API, the talk will focus on several core concepts of ARA that have a profound impact on the design of your code. It strives to give you a better idea about both the features and workflows that users will expect from ARA products, and the costs and liabilities involved. It will enable you to make an educated decision about whether or not ARA is the right tool for your product, and get you started with the right mindset should you go for it.

If you’re interested in this talk, please also note the follow-up session at 4:20 which will demonstrate how ARA is integrated into JUCE.



Speakers

Stefan Gretscher

Software Developer, Celemony
Stefan's career in audio programming has led him from hand-crafting bare-bones assembler on the DSP-based platforms of the late 90s to working on today's Melodyne with its roughly 250k lines of just the audio model and processing C++ code. Along that path, his focus shifted from signal...


Tuesday November 15, 2022 12:20pm - 12:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

Announcing SoundStacks' New Cmajor Platform
The SoundStacks team will be announcing and demonstrating their new Cmajor platform. Cmajor is our new language and platform for audio development, offering great performance and easy development for both beginners and professional DSP programmers. Join us for the big reveal!

Speakers

Julian Storer

CEO, Sound Stacks Ltd
Jules is a developer and founder who has created several audio technologies and companies in his 20+ year career. He's best known for creating JUCE and Tracktion, and is currently CEO of Sound Stacks Ltd.

Cesare Ferrari

CTO, Sound Stacks Ltd

Lucas Thompson

Senior Software Engineer, Sound Stacks Ltd

Harriet Drury

Junior Software Engineer, Sound Stacks Ltd


Tuesday November 15, 2022 12:20pm - 12:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

Real-Time Audio Programming on Apple Silicon
Apple Silicon has provided a major increase in the raw signal processing capabilities available to developers and users. In this talk, we'll cover topics such as:
- writing realtime-safe code
- how to configure audio realtime threads to obtain maximum performance
- how to measure performance meaningfully
- memory reordering differences to be aware of when porting x86 code
- out-of-process Audio Unit hosting, for compatibility with plug-ins with different architectures from the host, as well as for the stability of host applications
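The memory-reordering point deserves emphasis: code that happens to work on x86's strong memory model can fail on ARM. A minimal illustration of the correct acquire/release pairing (a generic C++ example, not Apple sample code):

```cpp
#include <atomic>

// On x86, relaxed stores often "work" because the hardware is strongly
// ordered; on ARM (Apple Silicon) the release/acquire pair below is
// required for the reader to see `value` once `ready` is observed true.
std::atomic<bool> ready{false};
int value = 0;

void producer(int v) {
    value = v;                                    // plain write...
    ready.store(true, std::memory_order_release); // ...published by the release store
}

bool consumer(int& out) {
    if (ready.load(std::memory_order_acquire)) {  // acquire pairs with the release
        out = value;                              // guaranteed to see the write above
        return true;
    }
    return false;
}
```

Runtime sanitizers and stress testing on the target hardware are the practical ways to find places where code has been silently relying on x86 ordering.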



Speakers

Doug Wyatt

Audio API Architect, Apple


Tuesday November 15, 2022 12:20pm - 12:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

Live interview with Niklas Odelholm, VP, Softube
Bobby Lombardi (PACE and ADC Chair) will interview Niklas, one of the original four founders of Softube and currently VP of Products. We'll hear about Niklas' history with music and computer technology, and how that led him to meet the other founders of Softube. We'll cover the early years, from a small start-up navigating evolving technologies and getting its modeling technology into the hands of third-party partners, to the eventual move into developing, marketing, and selling products under Softube's own branding, with an impressive and expanding portfolio of officially licensed partners. We take a technical dive into the core fundamentals of component-level analog modeling and the challenges of accurately replicating classic vintage characteristics, dynamics, and sound. We'll also discuss signal processing, the ongoing challenge of staying ahead of the curve with cross-platform and DSP development, and the importance of end users' involvement in product design and optimisation.


Speakers

Niklas Odelholm

VP, Softube
Niklas started his career as a signal processing engineer in 2003, but has over the years worked in almost every role at Softube. He is currently VP of Products, working on the big picture (strategy), but is happiest when he can be creative with algorithms or interface design...


Tuesday November 15, 2022 12:20pm - 12:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

12:50pm GMT

Lunch
Tuesday November 15, 2022 12:50pm - 2:00pm GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

12:50pm GMT

Socialize, Network & Explore The Virtual Venue
Interact with other attendees, visit our numerous exhibitors and their interactive exhibition booths and take part in a fun puzzle treasure hunt game during breaks in our scheduled content! Have you visited the cloud lounge yet?

Tuesday November 15, 2022 12:50pm - 2:00pm GMT
Gather Town

1:05pm GMT

ADC Online Booth Tour
Join our ADC Online host Oisin Lunny for a guided tour of the ADC22 virtual venue on Gather.

Please meet at the ADC22 Gather central meeting point (by the large ADC22 logo in front of the Apple exhibit booth).

Speakers

Oisin Lunny

Co-founder, Galaxy of OM, S.L.
Oisin Lunny is an award-winning marketer, webinar and podcast host, MC, public speaker, virtual event consultant, UX business professor, and journalist. His work has been translated into Chinese and Arabic, and read over half a million times as a senior contributor to Forbes, mus...


Tuesday November 15, 2022 1:05pm - 1:30pm GMT
Gather Town

1:35pm GMT

ADC Online Booth Tour
Join our ADC Online host Oisin Lunny for a guided tour of the ADC22 virtual venue on Gather.

Please meet at the ADC22 Gather central meeting point (by the large ADC22 logo in front of the Apple exhibit booth).

Speakers

Oisin Lunny

Co-founder, Galaxy of OM, S.L.
Oisin Lunny is an award-winning marketer, webinar and podcast host, MC, public speaker, virtual event consultant, UX business professor, and journalist. His work has been translated into Chinese and Arabic, and read over half a million times as a senior contributor to Forbes, mus...


Tuesday November 15, 2022 1:35pm - 2:00pm GMT
Gather Town

2:00pm GMT

Case Study: Eliminating C++ Undefined Behavior, Plug-in Contract Violations, and Intel Assumptions in a Legacy Codebase
For large C++ audio-plugin codebases, adding support for a new platform (such as Apple Silicon/ARM) can be a scary, expensive endeavor. One of the biggest causes for alarm is C++ undefined behavior (UB), which is an unfortunate part of many legacy codebases. After a brief review of what UB is, we will discuss what issues it can cause and why it should be avoided. We'll also discuss how paying attention to the details of audio plug-in format "contracts", particularly with regard to threading, can simplify the process of supporting new platforms and new DAWs. Finally, we'll go over the specific cultural and tooling initiatives we used to eliminate bad behavior in our audio plug-in codebase, including how we used static analysis, plug-in validators, and clang runtime sanitizers to identify and address issues. We hope attendees leave the session with actionable ideas for how to address these sorts of issues in their own codebases.
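As one concrete example of the kind of UB such a codebase can harbour (illustrative, not taken from the case study): signed integer overflow is undefined in C++, so an overflow check must run before the addition, not after it.

```cpp
#include <cstdint>
#include <limits>

// A "checked add" that stays well-defined: it tests whether the addition
// *would* overflow instead of performing it and inspecting the result,
// which UBSan would (rightly) flag as undefined behavior.
bool safeAdd(int32_t a, int32_t b, int32_t& out) {
    if (b > 0 && a > std::numeric_limits<int32_t>::max() - b) return false; // would overflow
    if (b < 0 && a < std::numeric_limits<int32_t>::min() - b) return false; // would underflow
    out = a + b; // now guaranteed in range, so well-defined
    return true;
}
```

The post-hoc version (`if (a + b < a) ...`) happens to "work" on many compilers and then silently breaks under optimization or on a new platform — exactly the class of bug the session describes hunting with sanitizers.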


Speakers


Tuesday November 15, 2022 2:00pm - 2:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

How does Autotune work, really?
Autotune has captivated music producers and consumers for over 20 years, but what is the technical wizardry behind its success? We will take a deep dive into the original Autotune patent to learn more.


Speakers

Xavier Riley

Researcher, Queen Mary University of London
I'm currently a PhD student in the Artificial Intelligence and Music (AIM) program at Queen Mary University of London. My research interests are wide-ranging! At the moment they focus on pitch tracking/perception and automatic music transcription. Prior to the PhD I worked in web development...


Tuesday November 15, 2022 2:00pm - 2:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

PANEL: Tabs or Spaces?
A group of opinionated expert programmers will argue over the right and wrong answers to a selection of programming questions which have no right or wrong answers.

We'll aim to cover a wide range of topics such as: use of locks, exceptions, polymorphism, microservices, OOP, functional paradigms, open and closed source, repository methodologies, languages, textual style and tooling.

The aim of the session is to demonstrate that there is often no clear-cut best-practice for many development topics, and to set an example of how to examine problems from multiple viewpoints.


Speakers

Julian Storer

CEO, Sound Stacks Ltd
Jules is a developer and founder who has created several audio technologies and companies in his 20+ year career. He's best known for creating JUCE and Tracktion, and is currently CEO of Sound Stacks Ltd.

Dave Rowland

CTO, Tracktion
Dave Rowland is the CTO at Audio Squadron (owning brands such as Tracktion and Prism Sound), working primarily on the digital audio workstation, Waveform and the engine it runs on. Other projects over the years have included audio plugins and iOS audio applications utilising JUCE...


Tuesday November 15, 2022 2:00pm - 2:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Anatomy of a Bare Metal Synth
This talk is aimed at any embedded-curious audio software developers who have primarily done native software development, but are interested in what goes into building standalone music gizmos like digital synthesizers, guitar pedals, or other noisemakers. Using the Daisy platform as context, we will discuss a number of fundamental bare metal concepts such as serial communication protocols (MIDI!), direct memory access, serial audio interfaces, and general purpose input/output.


Speakers

Jack Campbell

Senior Software Engineer, Universal Audio
I have had a lot of fun writing audio software for the last five years. I mostly write audio stuff for Universal Audio native plugins these days, but I'm excited to share what I've learned about embedded audio programming via the deep dives I've done on the side. (And for UA in a...


Tuesday November 15, 2022 2:00pm - 2:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

C++ 20 concepts in the wild: Refactoring and open sourcing VCTR, a foundation of the sonible codebase
Containers with contiguous storage are basic building blocks for many applications. The C++20 standard library offers two popular ones, std::vector and std::array, along with std::span as a view over contiguous data stored anywhere. While they are great at storing data, we often missed features like:
  • functionality to transform the data stored in them
  • mathematical operations on numerical vectors with an expressive syntax, at best using SIMD-accelerated implementations in the background where available
  • copying data without having to bother about heap allocation on the audio thread
  • being able to perform all of the operations above in a constexpr context

VCTR is our in-house solution to these challenges. It is a set of wrapper classes around the three standard library classes that adds a bunch of functionality to them. Depending on the actual template type, it enables or disables features and member functions and adds some handy constructors. A vector of unique pointers will have different functions than a vector of numerical values. A vector of complex numbers will have different functions than a vector of real-valued numbers. A mathematical operation might use a different implementation strategy depending on the platform it's built for. And all of that is wrapped behind a beautiful, intuitive API.

What evolved into a somewhat cumbersome bunch of SFINAE constructs over the last few years has now been completely rewritten using C++20 concepts, resulting in a much cleaner implementation that will be open sourced after ADC. The talk will show you how we use concepts, smart expression templates and other template trickery to create a powerful yet easy-to-use class, and will present a few of its core features.


Speakers

Janos Buttgereit

Software Developer, sonible
I am a passionate audio software developer, as I love both the technical side of crafting the code behind a plugin and actually using the product to create good sounding music. Working at Sonible, I like focusing on the more abstract and low level aspects of our codebase and I love...


Tuesday November 15, 2022 3:00pm - 3:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

Develop, Debug and Deploy: MIDI 2.0 Prototyping and Tools
MIDI 2.0 extends MIDI in many ways: more channels, higher resolution, jitter reduction, auto-configuration via bidirectional transactions.

Core members of the MIDI Association present an overview of the available tools for developing, debugging, and deploying MIDI 2.0 products.

A number of tools have been developed to jump-start prototyping and validation of UMP functions and fuel the transition to MIDI 2.0. These tools include software applications for implementing and debugging UMP software and hardware, and testing MIDI-CI implementations.

All tools will be shown in action and basic usage will be explained.
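For orientation, here is how a MIDI 2.0 note-on looks when packed as a 64-bit Universal MIDI Packet (a sketch following the published UMP format; the helper name is illustrative, not part of the presenters' tools):

```cpp
#include <array>
#include <cstdint>

// Pack a MIDI 2.0 channel-voice note-on into a 64-bit UMP (two 32-bit words).
// Word 0: message type 4, group, status 0x9, channel, note, attribute type.
// Word 1: 16-bit velocity (vs. 7 bits in MIDI 1.0) plus 16-bit attribute data.
std::array<uint32_t, 2> makeNoteOn(uint8_t group, uint8_t channel,
                                   uint8_t note, uint16_t velocity) {
    uint32_t word0 = (0x4u << 28)                      // message type 4: MIDI 2.0 channel voice
                   | (uint32_t(group & 0xF) << 24)
                   | (0x9u << 20)                      // note-on status nibble
                   | (uint32_t(channel & 0xF) << 16)
                   | (uint32_t(note & 0x7F) << 8);     // attribute type 0 in the low byte
    uint32_t word1 = uint32_t(velocity) << 16;         // 16-bit velocity, no attribute data
    return {word0, word1};
}
```

The higher velocity resolution alone shows why dedicated debugging tools are useful: the byte-oriented habits of MIDI 1.0 no longer apply to UMP streams.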


Speakers

Florian Bömers

Founder, Bome Software GmbH & Co. KG
Florian Bömers is an enthusiastic amateur musician and started programming audio and MIDI applications already in his childhood. Now he manages his company Bome Software, which creates standard software and hardware solutions for MIDI translation and MIDI networking. In the MIDI...


Tuesday November 15, 2022 3:00pm - 3:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

Real-time interactive synthesis with ML: differentiable DSP in a plugin
2022 has been an exciting year for machine learning (ML) and realtime neural audio synthesis. We share how we achieved real-time differentiable DSP (DDSP), in an expressive and transcultural instrument, operating with low latency in DAWs and mobile devices worldwide. Our talk will cover:
  • building an intuition for DDSP and the capabilities of an audio machine learning system,
  • what problems we faced and how we achieved real-time DDSP in both pro-audio and consumer applications,
  • how we facilitated creative musical expression in our ML system,
  • how we approached testing, and
  • how we see real-time ML audio processing developing in the future.


Speakers

Tuesday November 15, 2022 3:00pm - 3:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

Combining serverless functions with audio VST plugins
VST plugins allow us to host virtual instruments and alter audio signals to create impressive effects in our audio projects. Once users find the perfect parameters to sound just the way they want, the VST allows them to save these parameters locally in XML files.

But what if a user wants the same settings on another device, or wants to study or share them? We could replicate them manually, but why not save these parameters in the cloud and have our collection always available? This workshop will review how to create a serverless project with AWS Amplify, GraphQL and Cognito user authentication for creating our VST parameter library in the cloud.

Speakers

Tuesday November 15, 2022 3:00pm - 3:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

3:50pm GMT

Break
Tuesday November 15, 2022 3:50pm - 4:20pm GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

AI Driven Content Creation: Bringing Voice Synthesis to Audio Applications
AI voice synthesis technology is making strides, bringing realistic voices to our phones, cars, and store kiosks. At Supertone, we focus on researching and developing inspirational voice synthesis for content creators in music, film, and games. In this talk, we will cover the following topics:
  • Recent research trends in AI voice synthesis
  • Examples of how Supertone applies voice synthesis tech to media production
  • Possible forms that AI voice synthesis can take as audio applications
  • Challenges and solutions unique to developing audio applications using neural networks
  • Demonstration of real-time AI voice applications including speech enhancement and voice conversion

Speakers

Sky Yoo

Product Manager, Supertone, Inc.
Sky is a Product Manager at Supertone, Inc. As the project lead for GOYO Voice Separator, he works with Supertone's researchers, developers, and designers to create real-time audio plug-ins using neural network models. Previously a developer, Sky has also created prototype plug-ins... Read More →

ChangHun Sung

Software Engineer, Supertone
Chang Hun is a Software Engineer at Supertone, Inc. He previously worked in the game industry and used to develop game engines. Chang Hun now develops high performance C++ frameworks to accelerate the process of productizing ML models. He is also the principal clarinetist in an amateur... Read More →


Tuesday November 15, 2022 4:20pm - 4:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

Automating Audio Device Testing - A brief existential guide
This will be a whistle-stop tour of how automated audio interface testing began at Focusrite. Be prepared for big questions, big numbers, and a chance to win some Focusrite T-shirts.

Speakers

Joseph Deller

Software Developer, Focusrite
Joe has been recording music and playing in bands since he was a teenager, starting with a valve reel-to-reel recorder.  He started coding on a ZX81, worked for software companies large and small, and ran a recording studio in Oxford.  Joe combines his love of electronics, music... Read More →

Jake Wignall

Software Developer, Focusrite
After being fascinated by the equipment in studios he visited with his teenage band, Jake decided to pursue a degree in music technology where he was first introduced to the world of programming. He managed to get a placement with Focusrite's QA Team and has worked as a Software Developer... Read More →


Tuesday November 15, 2022 4:20pm - 4:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

Building ARA Plug-ins with JUCE
Audio Random Access (ARA) gives plug-in developers a new API through which they can exchange information about the current project with the DAW. This includes being able to access audio source data and playback region time information in its entirety, without being restricted to the current playback buffer.

This talk will focus on accessing these features through the JUCE-provided ARA API wrapper. We will guide you through enabling ARA in JUCE, building the AudioPluginHost with ARA hosting enabled, and configuring a JUCE plug-in project to provide access to ARA-related capabilities. Using a simple plug-in project, we will demonstrate some of the unique features that ARA can provide.

If you’re interested in this talk, you may find it useful to also attend Celemony’s more general introduction of ARA at 12:20pm.

Speakers

Attila Szarvas

C++ Software Engineer, JUCE
I studied electrical engineering and got drawn into signal processing and software development while working on active noise cancelling research topics. I've been working ever since as a programmer in various fields, but the most fun I had was doing audio plugin development in the... Read More →

Stefan Gretscher

Software Developer, Celemony
Stefan's career in audio programming has led him from hand-crafting bare-bones assembler on the DSP-based platforms of the late 90s to working on today's Melodyne with its roughly 250k lines of just the audio model and processing C++ code. Along that path, his focus shifted from signal... Read More →


Tuesday November 15, 2022 4:20pm - 4:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

Real-time audio source separation on iOS devices
Source separation performance has seen tremendous progress in recent years thanks to advances in deep learning technologies. Running it in real time would give musicians new ways to interact with music, e.g. in DJing and music learning. In this talk, we will cover the challenges we had to tackle in order to provide our users with real-time, low-latency source separation on iOS mobile devices.
We will talk about:
- state-of-the-art deep learning-based source separation algorithms
- model optimisation for iOS mobile platforms (Core ML and ANE)
- low-level integration for real-time use-cases


Speakers

Adrien Ferrigno

Audio Developer, MWM
Audio developer, computer scientist, researcher, iOS developer and musician with a crush on mix/mastering. Adrien joined the MWM adventure in 2015 and has worked on various audio projects since (edjing, Guitar Tuner, Stemz…). He has been in multiple bands, produced his own tracks... Read More →

Clément Tabary

ML Engineer, MWM
Clement is a deep learning research engineer at MWM. He applies ML algorithms to a wide range of multimedia fields, from music information retrieval to image generation. He's currently working on audio source separation, music transcription, and automatic DJing.


Tuesday November 15, 2022 4:20pm - 4:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

5:00pm GMT

KEYNOTE: The Musical Instruments of Star Trek
In the futuristic universe of Star Trek there are a lot of musical instruments, and many of them use far-future technology. The designers of these instruments never intended them to actually work, and were therefore led by their imaginations and not by the limitations of earthly technology – the opposite of the instrument design process today, where the design tends to be heavily influenced by the affordances of the technology we have to hand.

In this talk, music technology researcher and theorist Astrid Bin explains how she explored this imagination-first process of instrument design by recreating an instrument, as faithfully as possible, from the show. Through the process – from discovering the instrument, to getting input from the show's original production designer, to figuring out how to make the instrument's behaviour true to the original intentions (but using primitive 21st-century embedded sensors and computers) – she describes what she learned about designing real digital musical instruments through trying to recreate an imaginary one.

Speakers

Astrid Bin

Ableton AG
Astrid Bin is a music technology researcher and theorist based at Ableton in Berlin. She is also a founding developer of Bela.io, the platform for creating beautiful interaction. She spends her time writing, playing drums, making instruments, and trying to make what is perfect more... Read More →


Tuesday November 15, 2022 5:00pm - 6:00pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

6:00pm GMT

Cloud Lounge Social Mixer
Visit the Cloud Lounge for an informal social mixer before the start of the rebroadcast schedule. Hang out, play games with other attendees and catch up with old friends.



Tuesday November 15, 2022 6:00pm - 6:30pm GMT
Gather Town

6:00pm GMT

Evening Meal & Networking
Tuesday November 15, 2022 6:00pm - 7:30pm GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

6:00pm GMT

Women In Audio Reception
Tuesday November 15, 2022 6:00pm - 7:30pm GMT
South Place Hotel

6:30pm GMT

Open Mic Night (Online)
The ADC Open Mic Night comes to Gather for our online conference attendees! A fun, informal online event with lightning talks, music performances, and some impromptu standup comedy.

If you are attending the ADC online, you can contribute to the online Open Mic night with a 5 minute talk or performance! Please use the sign up form here

This is an event exclusively for our online attendees. It won't be recorded, published, or streamed.

Speakers

Oisin Lunny

Co-founder, Galaxy of OM, S.L.
Oisin Lunny is an award-winning marketer, webinar and podcast host, MC, public speaker, virtual event consultant, UX business professor, and journalist. His work has been translated into Chinese and Arabic, and read over half a million times as a senior contributor to Forbes, mus... Read More →


Tuesday November 15, 2022 6:30pm - 8:00pm GMT
Gather Town

7:30pm GMT

The ADC Quiz
Join us and test your knowledge of music, lyrics, random facts and more at the ADC Quiz!
Bring your friends, or meet new ones as you work in teams to win incredible prizes.

Speakers

Derek Heimlich

Director of Sales - Pro Audio, PACE Anti-Piracy, Inc.
I work with software developers to help protect and monetize their software and plugins. Come talk to me to learn more!


Tuesday November 15, 2022 7:30pm - 9:00pm GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

9:00pm GMT

Networking
Tuesday November 15, 2022 9:00pm - 10:00pm GMT
CodeNode 10 South Pl, London EC2M 7EB, UK
 
Wednesday, November 16
 

8:30am GMT

Breakfast
Wednesday November 16, 2022 8:30am - 9:00am GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

9:00am GMT

Apple, Google and Microsoft Implementations of MIDI 2.0
Engineers from Apple, Google, and Microsoft will present the current state of MIDI 2.0 implementations in their operating systems. We’ll describe the API changes required for MIDI 2.0 for each platform as well as discuss the philosophy and reasoning behind various design decisions. We’ll also present the status of transports, such as USB and Ethernet. If you’re a developer who is interested in the practical implementations of MIDI 2.0, this is the session for you.

Speakers

Pete Brown

Principal Software Engineer, Windows, Microsoft
Pete works in the Windows + Devices org in Microsoft, primarily focusing on partners, apps, and technology for musicians. He's the lead for the Windows MIDI Services project which is bringing an updated MIDI stack to Windows, and adding full MIDI 2.0 support. He also serves as the... Read More →

Phil Burk

Staff Software Engineer, Google Inc
Music and audio software developer. Interested in compositional tools and techniques, synthesis, and real-time performance on Android. Worked on HMSL, JForth, 3DO, PortAudio, JSyn, WebDrum, ListenUp, Sony PS3, Syntona, ME3000, Android MIDI, AAudio, Oboe and MIDI 2.0.

Torrey Holbrook Walker

Audio/MIDI Framework Engineer, Apple Inc.
I am a senior software framework engineer on the Core Audio team at Apple and a frequent MIDI specification contributor and prototyper with the MIDI Association. I have been passionate about creating music production technologies that delight audio software developers, musicians... Read More →

Mike Kent

Chief Strategy Officer, AmeNote Inc.
Mike Kent is the Co-Founder and Chief Strategy Officer of AmeNote Inc. Mike is a world leader in technology for musical instruments and professional audio/video. Mike is the Chair of the MIDI 2.0 Working Group of the MIDI Association. He is a co-author of USB MIDI 1.0, the principal... Read More →


Wednesday November 16, 2022 9:00am - 9:50am GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

9:00am GMT

Every Beat Counts - Tempo Sync 101
Audio plug-in effects and instruments are sometimes required to be musical, but...
Digital audio uses audio samples at a specific rate.

Music, on the other hand, uses time signatures, subdivisions and tempo.
How do those two "worlds" work together?

What we'll explore in this talk:
  • Basic terms used in digital music.
  • Musical vs "clock" time.
  • A real-world example in an open-source JUCE plug-in.
  • Pitfalls & caveats when implementing tempo sync.

While the talk is suitable for beginner audio developers, some basic musical knowledge is recommended.

Speakers

Tal Aviram

Software Engineer, Sound Radix
Hi, I'm Tal, an audio developer at Sound Radix working with some crazy-science fellows. I've been a music programmer/keyboardist for almost a decade, so I don't see myself as a proper 'developer' nor a 'musician'. My main motivation for audio development is to make frustration-less... Read More →


Wednesday November 16, 2022 9:00am - 9:50am GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

9:00am GMT

Four Ways To Write A Pitch-Shifter
We look at some approaches to pitch-shifting music, and the related problem of time-stretching, with intuitive visual explanations, code and audio examples.  We start with a simple overlap-add approach, explore the mechanics of FFT-based effects and frequency-domain approaches, and finish with the design used in a new open-source polyphonic pitch/time C++ library.


Speakers

Geraint Luff

Signalsmith Audio Ltd.
Geraint grew up with a strong interest in music, maths and programming. He now heads up Signalsmith Audio, a small company which provides custom audio/DSP algorithm design and implementation, as well as developing their own line of audio plugins.


Wednesday November 16, 2022 9:00am - 9:50am GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

9:00am GMT

Spatial Audio for Live: Why and How?
Stereo has been the status quo for amplified live events for decades. While loudspeakers, their amplifiers and the mixing consoles feeding them have drastically improved during this time, the fundamental concept of mixing to a stereo bus has remained the same.

In this talk, we will first introduce the benefits of spatial audio for live events (from small to large scale) and look at the object-based mixing tools available today.

We will then dive deeper into the most interesting aspects of developing such a product:
  • Going from prototype to a finished product, choosing the right platform for the job and managing automated testing.
  • Developing and testing audio algorithms, such as a 3D panning algorithm, a 3D Parametric Reverb or a binaural renderer; and making sure those algorithms work at scale.
  • Optimizing real-time code on a real-time Linux kernel to always squeeze more and more DSP.

Speakers

Olivier Petit

Head of Creative Software, L-Acoustics
After an MSc in Integrated Circuit design, I joined the Creative Technologies department of L-Acoustics in 2018 as a C++ software engineer. I have been taking an active part in developing innovative technologies to bring immersive audio to live performances, striving to better... Read More →

Frederic Roskam

Head of Immersive Audio, L-Acoustics


Wednesday November 16, 2022 9:00am - 9:50am GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

10:00am GMT

Jumpstart Guide To Deep Learning In Audio For Absolute Beginners: From No Experience And No Datasets To A Deployed Model
Deep learning is becoming more and more significant in all areas of science including audio processing. Yet a lot of people have a hard time understanding it or are too scared to start learning it altogether. Is deep learning really so difficult that you need a PhD to use it? Does it truly require huge datasets and gigantic computational clusters? Is it possible to deploy neural networks in real-time audio plugins? In this talk, I will show you how you can learn deep learning for Virtual Analog modeling of audio effects fast, for free, without a PhD, without any special equipment or loads of data, and deploy your deep learning model in an audio plugin.

What you will learn:
  • what are the 4 biggest myths concerning deep learning?
  • how to learn deep learning for audio fast in 4 simple steps for free
  • where to find and how to synthesize a dataset to model your analog device of choice
  • how to train your first deep learning model for audio using the basics of PyTorch and without a computational cluster
  • how to deploy your model in a real-time audio plugin

The presentation will feature a live demo of setting up a deep learning pipeline and training a neural network for Virtual Analog modeling of a distortion effect.

Speakers

Jan Wilczek

Audio Developer, Loudly GmbH
I am an Audio Developer of Music Maker JAM at Loudly GmbH in Berlin; an app to make loop-based music that runs on Android, iOS, and Windows. Additionally, I have created https://TheWolfSound.com to help students and software engineers learn audio programming for getting game audio, audio plugin, or mobile audio developer jobs. I am also a researcher in the area of Virtual Analog modeling using deep learning; my latest work was presented at the Digital Audio FX 2022 conference... Read More →


Wednesday November 16, 2022 10:00am - 10:50am GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

10:00am GMT

Optimising a Real-time Audio Processing Library
This talk will take you through optimising a codebase intended for real-time use from the most practical perspective. Filled with real-world examples and tales of success and failure this should give attendees the tools and knowledge to approach optimising their own code in a pragmatic and confident way.

First, we’ll cover how to actually measure and compare performance across different platforms, the tools to do this and most importantly how to do this continuously over time with CI. Next, we’ll look at the various strategies for identifying areas for optimisation and how these relate to real-world use cases. We’ll look at tradeoffs between CPU and memory and the environments these may have the most influence over.

Finally, we’ll look at some useful tricks and lesser known strategies and where sometimes what you’ve been taught doesn’t actually lead to the best results.

Speakers

Dave Rowland

CTO, Tracktion
Dave Rowland is the CTO at Audio Squadron (owning brands such as Tracktion and Prism Sound), working primarily on the digital audio workstation, Waveform and the engine it runs on. Other projects over the years have included audio plugins and iOS audio applications utilising JUCE... Read More →


Wednesday November 16, 2022 10:00am - 10:50am GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

10:00am GMT

WebAudio Modules 2.0: audio plugins for the Web Platform!
Web Audio Modules 2.0 (WAM) is the latest version of an open source audio plugin standard for the web platform, developed since 2015 by a group of academic researchers and developers from the computer music industry. Version 2.0 enables the development of audio effects, instruments, and MIDI controllers as plugins and compatible hosts, and takes into account recent evolution in the development of web technologies. Indeed, since 2018 W3C Web standards have matured: the appearance of WebAssembly, stabilization of WebComponents, support for AudioWorklets [1] in the Web Audio API, and continued evolution of JavaScript have all helped make professional-grade, Web-based audio production a reality. In addition, commercial companies now offer digital audio workstations (DAWs) on the Web which act as host Web applications and support plugins [2] (including WAM ones). Taking into account these developments and the feedback received from developers over the past few years, we released “Web Audio Modules 2.0” (WAM2), an open source SDK and API distributed as four GitHub repositories (https://github.com/webaudiomodules) and as npm modules (MIT License, see https://www.npmjs.com/search?q=keywords:webaudiomodules). WAM2 now supports parameter automation, plugin groups, audio thread isolation, MIDI events, and extended plugin/host communication. WAM2 is Web-aware: plugins can be loaded and instantiated by hosts from a simple URI using dynamic imports.

One of the repositories, wam-examples, comes with more than 20 examples of plugins written using different languages and build chains. It can be tried online here: https://mainline.i3s.unice.fr/wam2/packages/_/
Web Audio Modules 2.0 also comes with more extended examples, such as a guitar effect pedalboard plugin (https://wam-bank.herokuapp.com/), an open source DAW prototype (https://wam-openstudio.vidalmazuy.fr/), and a collaborative sequencer (a sort of Ableton Live meets Google Docs) entirely developed with WAM2 (https://sequencer.party/), which comes with more than 20 open source WAM2 plugins.

Furthermore, the FAUST online IDE (https://faustide.grame.fr/) can now compile FAUST code into a WAM2 plugin, including GUIs and online publication for reuse in any compatible host (tutorial here: https://docs.google.com/document/d/1HDEm4m_cD47YBuDilzGYiANYQDktj56Njyv0umGYO6o/edit?usp=sharing-)

In this talk, we will present the WAM2 proposal, illustrated by many interactive demonstrations of plugins and hosts, including open source and commercial ones.

[1] H. Choi. Audioworklet: the Future of Web Audio. International Computer Music Conference ICMC 2018.
[2] M. Buffa, J. Lebrun, S. Ren, S. Letz, Y. Orlarey, and al.. Emerging W3C APIs opened up commercial opportunities for computer music applications. The Web Conference 2020 - DevTrack, Apr 2020, Taipei.

Speakers

Michel Buffa

Professor, Université Côte d'Azur
Michel Buffa is a professor/researcher at University Côte d'Azur, a member of the WIMMICS research group, common to INRIA and to the I3S Laboratory (CNRS). He contributed to the development of the WebAudio research field, since he participated in all WebAudio Conferences, being part... Read More →


Wednesday November 16, 2022 10:00am - 10:50am GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

10:00am GMT

Parameter Inference of Music Synthesizers with Deep Learning
Synthesizers are crucial for designing sounds in today's music. However, to create the desired sound texture by tuning the right synthesizer parameters, one requires years of training and in-depth domain experience on sound design. Music producers might also search through preset banks, but it takes extensive time and effort to find the best preset that gives the desired texture.

Imagine a program into which you can drop your desired audio sample, and which automatically generates the synthesizer preset that could recreate the sound. This task is commonly known as "parameter inference" of music synthesizers, which could be a useful tool for sound design. In this talk, we will discuss how deep learning techniques can be used towards solving this task. We will cover recent works that use deep learning to perform parameter inference on a variety of synthesizers (FM, wavetable, etc.), as well as the challenges that were faced in solving this task.

Speakers

Hao Hao Tan

Software Engineer, BandLab


Wednesday November 16, 2022 10:00am - 10:50am GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

10:50am GMT

Break
Wednesday November 16, 2022 10:50am - 11:20am GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Fast, High-quality Pseudo-random Numbers for Audio Developers
Many C++ developers reach for std::rand() the first time they need a pseudo-random number. Later they may learn of its downsides and include <random> to use the Mersenne Twister (std::mt19937) and various distributions from the standard library. For some the journey ends here, but for others questions arise: How should I properly seed my generators? How should I approach portability? Does std::mt19937 failing some statistical tests matter to me? Am I leaving performance on the table using std::mt19937? What quality do I need for my use-case and how can I get the best deterministic performance for that quality?

After a brief introduction to generating pseudo-random numbers with the C++ standard library, this talk will look at answering these questions in digital audio applications; the same lessons could be applied elsewhere, such as in games, graphics, or some simulations. We will examine benchmarks and quality analyses of standard library pseudo-random number generators and of modern generators outside the standard. We will close by demonstrating ways to improve runtime performance and determinism, at the cost of minor quality loss, compared to using the standard library distributions.

Speakers

Roth Michaels

Soundwide


Wednesday November 16, 2022 11:20am - 12:10pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

11:20am GMT

PANEL: Accessibility: Will You Be Next?
This session, hosted by music producer, audio engineer and accessibility consultant Jason Dasent, brings together key players from across the music industry, all with a passion for accessibility. The event will take the form of a discussion panel focusing on the current state of accessibility as it relates to music technology, as well as how we can work together to take it to the next level.

Topics that will be covered include: the advancements made by several music equipment manufacturers in the last two years; how to inspire other music equipment manufacturers to make their products and services accessible; marketing opportunities for companies that make accessible products; and how we bridge the gap between able-bodied and differently abled music industry practitioners, leading to more collaborations and employment opportunities for professional differently abled practitioners. The event will culminate in a 20-minute performance, showing the latest in accessible music tech from keyboards to groove stations, to a fully accessible mixing system.

Throughout the conference, attendees are encouraged to visit the “Will You Be Next?” Accessibility Zone, where they can meet software engineers and managers from a variety of companies who are already involved in accessible music tech. Visitors to the Accessibility Zone will be invited to get hands-on with all the accessible equipment on display. Attendees will also be able to experience music production from recording to mastering, all with accessible equipment.

Speakers

Jason Dasent

Owner & CEO, Studio Jay Recording
Jason Dasent has over 25 years’ experience in all aspects of recording and music production. Jason launched Studio Jay Recording in Trinidad in 2000 catering to both the Advertising Sector and Artist Production for many top Caribbean recording artists. He has done Music Scores... Read More →

Quintin Balsdon

Software Engineer - Accessibility, Spotify
Quintin has been an Android developer since 2011 and works for Spotify in the accessibility team. He is currently focusing on projects to make accessibility part of the standard development process by creating custom developer tools and apps, like TalkBack for developers, which allows... Read More →

Mary-Alice Stack

Chief Executive, Creative United

James Cunningham

Queen's University Belfast

Grace Capaldi

Director, Grinning Dog Records/Echotown Studio
Grace is new to the industry and she is enjoying the ride. She co-runs a record label and accessible recording studio in Dorset with her husband and sister. The studio was a complete new build and Grace co-project managed every step of the process, being a permanent wheelchair user... Read More →

Harry Morley

Software Developer, Focusrite
Harry has been a software developer at Focusrite for 3 years. He mainly works on C++ software that interacts with audio hardware, such as the Vocaster and Scarlett interfaces. Harry loves talking all things accessibility (he played a part in making Vocaster screen reader-accessible... Read More →


Wednesday November 16, 2022 11:20am - 12:10pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Pipeline for VA Modelling with Physics-Informed Machine Learning
Wave Digital Filters and neural networks are two popular solutions for circuit modelling. In this presentation, we demonstrate a pipeline that makes use of our Differentiable Wave Digital Filters library. A dataset was collected from a diode clipper circuit and, with the library, was used to train a real-time deployable model. The trained model has higher accuracy and similar computation time when compared to traditional white-box models. We present this methodology to demonstrate the objective qualities and advantages of this approach.

IF YOU ARE ATTENDING ONLINE, ALL TALK SESSIONS CAN BE ACCESSED FROM THE MAIN LOBBY: https://conference.audio.dev

Speakers
CJ

Christopher Johann Clarke

PhD Candidate, Christopher Clarke
Christopher Clarke is a PhD candidate at SUTD (multiphysics, AI/machine learning) with a background in music, specialising in generative algorithms (MMus) and psychophysics in music (BA, recipient of the Phillip Holt Award). 


Wednesday November 16, 2022 11:20am - 12:10pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Point to Point Modeling: An Automatic Circuit Solving Library
The "Point to Point Modeling" library is a software tool for simulating analog circuits. It is intended for audio signal processing applications, such as real-time plugins and mobile apps. In the library, component-level circuit analysis is automated, allowing for arbitrary circuits to be easily implemented based on resistors, capacitors, potentiometers, op-amps, diodes, transistors, and tubes. In addition to solving programmer-specified circuits, the library also includes 150+ pre-made circuits common in audio EQs, consoles, effect pedals, guitar amps, and more. Implementations of the library are available in C++ for commercial software as well as in MATLAB for rapid prototyping. An overview of the library's API for developers will be provided, along with some examples.

Code repositories discussed in presentation:
https://github.com/HackAudio/PointToPoint_LT
https://github.com/HackAudio/PointToPoint_MATLAB1

Speakers
avatar for Eric Tarr

Eric Tarr

Dr. Eric Tarr teaches classes on digital audio, computer programming, signal processing and analysis at Belmont University. He received a Ph.D., M.S., and B.S. in Electrical and Computer Engineering from the Ohio State University. He received a B.A. in Mathematics and a minor in Music... Read More →


Wednesday November 16, 2022 11:20am - 12:10pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

Ask a Lawyer
Do you have a legal problem or question relating to your audio products? Want to discuss different ways to approach licensing and strategic partnerships? The new privacy regulations? Points to consider if you’re a UK or EU company doing business in the U.S. or vice versa? Or perhaps you have questions about protecting your intellectual property? A team of expert lawyers including Heather Rafter (RafterMarsh), Philipp Lengeling (RafterMarsh) and Francine Godrich (Focusrite) will be available to answer your questions.  Bring us your best legal questions!


Speakers
avatar for Heather Rafter

Heather Rafter

Principal, RafterMarsh
Heather Dembert Rafter has been providing legal and business development services to the audio, music technology, and digital media industries for over twenty-five years. As principal counsel at RafterMarsh US, she leads the RM team in providing sophisticated corporate and IP advice... Read More →
avatar for Francine Godrich

Francine Godrich

General Counsel, Focusrite
Francine is General Counsel (i.e. does lots of legal stuff) and Company Secretary (i.e. deals with corporate governance administration) to one of the world’s most passionate and prestigious Music Tech plcs. Francine spends a lot of her time looking at how Focusrite plc can grow... Read More →
avatar for Philipp Lengeling

Philipp Lengeling

Senior Counsel, RafterMarsh Law
Philipp G. Lengeling, Mag. iur., LL.M. (New York), Esq. is an attorney based in New York (U.S.A.) and Hamburg (Germany), who is heading the New York and Hamburg based offices for RafterMarsh, a transatlantic boutique law firm (California, New York, U.K., Germany) that specializes... Read More →


Wednesday November 16, 2022 12:20pm - 12:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

Building Tomorrow’s Audio Workflows with Pro Tools SDKs
Over the years, many software developers have figured out ingenious ways to integrate with the Pro Tools environment to deliver some of today’s most compelling workflow-enhancing tools, relied on by audio professionals the world over. But these necessary workflows far too often came at the cost of performance and reliability. Recognizing the huge potential for innovation driven by third-party audio developers, the Pro Tools team embarked on a journey to create and deliver a wide spectrum of APIs to facilitate the creation of tomorrow’s audio and music workflows. In this session, we will give ADC 2022 attendees an early look at upcoming Pro Tools APIs, allowing for scripting and automation of the Pro Tools application, as well as present concepts of additional SDKs in the works (Satellite SDK, Clip FX SDK, Session SDK and more). The Avid audio team is committed to a new era of openness, and we hope this session will drive much needed conversation around the needs of the community for deeper integration with Pro Tools.


Speakers
avatar for Francois Quereuil

Francois Quereuil

Director, Audio Product Management, Avid


Wednesday November 16, 2022 12:20pm - 12:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

Enabling unique sound experiences through user-centred design
How can you create real value for your users while streamlining design efforts and ensuring a consistent design process that is synchronized with development?

When designing hardware and software, the end user should always be at the center of one's attention. What are their needs and pains, and where can our product truly generate value for them? As a brand, you want to be recognizable, stand out from the crowd, and convey your brand values consistently across all touchpoints. On top of this, what you offer must work and perform flawlessly while seamlessly integrating into your customers' workflow, which means that design and development efforts must go hand in hand. This can only be achieved if the design efforts of your company are actively managed.

Vanessa and Alexander will talk about how they address these challenges as Designers and Audio Engineers with hands-on examples they experienced while working for Notation Creative, a Zurich-based Brand and Product Design Consultancy that originated from the German Audio company Sennheiser and is now strongly focusing on developing digital and physical audio solutions for various clients around the globe.
Notation Creative


Speakers
avatar for Alexander Häberlin

Alexander Häberlin

Senior Design Manager, Notation Creative
As the former Head of Global Brand Space Design at Sennheiser and one of the founding members of Notation Creative Consulting, Alexander has built broad experience in various brand and design disciplines, including strategy, business model innovation, retail, and industrial/user experience design... Read More →
avatar for Vanessa Barrera

Vanessa Barrera

Sound Engineer & UX Designer, Notation Creative
After successfully completing her studies in sound engineering in Bogota, Colombia, Vanessa joined Sonova in Switzerland, where her work was focused on R&D for hearing solutions. To expand her horizons in the field of User Experience Design, Vanessa joined Notation Creative in 2022... Read More →


Wednesday November 16, 2022 12:20pm - 12:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

Interactive Ear Training using Web Audio
This talk aims at showing how we can use web technologies like the Web Audio API and Canvas API to bring gamification in the field of web-based audio tutorials. By using the Web Audio API native audio nodes, we will show how we can build a simple, yet fully interactive ear training application for educational purposes.


Speakers
avatar for Fabrice Dos Santos

Fabrice Dos Santos

Web Engineering Manager, Slate Digital
Fabrice is a Web Engineering Manager at Slate Digital. His team of web developers provides online services in addition to Slate Digital plugins, and they are deeply interested in Web Audio as a complementary field of expertise for music software development. Fabrice has previously... Read More →
avatar for Charlène Queffelec

Charlène Queffelec

Senior UX Designer, Slate Digital
Charlene is a senior UX designer with extensive experience in education and gaming app design projects. She works on plugin design and user workflow with both product and web engineering teams at Slate Digital. She enjoys using gamification strategies to facilitate usage in even the... Read More →



Wednesday November 16, 2022 12:20pm - 12:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

12:50pm GMT

Lunch
Wednesday November 16, 2022 12:50pm - 2:00pm GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

12:50pm GMT

Women in Audio Working Lunch

Wednesday November 16, 2022 12:50pm - 2:00pm GMT
CAPSLOCK

12:50pm GMT

Socialize, Network & Explore The Virtual Venue
Interact with other attendees, visit our numerous exhibitors and their interactive exhibition booths and take part in a fun puzzle treasure hunt game during breaks in our scheduled content! Have you visited the cloud lounge yet?

Wednesday November 16, 2022 12:50pm - 2:00pm GMT
Gather Town

1:05pm GMT

ADC Online Booth Tour
Join our ADC Online host Oisin Lunny for a guided tour of the ADC22 virtual venue on Gather.

Please meet at the ADC22 Gather central meeting point (by the large ADC22 logo in front of the Apple exhibit booth).

Speakers

Wednesday November 16, 2022 1:05pm - 1:30pm GMT
Gather Town

2:00pm GMT

FM Synthesis explained with integrals
Frequency Modulation synthesis has a long history. From John Chowning's original research, through Yamaha's patent and commercialization, to modern hardware reissues, it has held its mystique and awed millions of musicians.

However, most FM synthesis tutorials only touch the surface of this veritable ocean. There is no introduction to this domain that combines the theoretical background with code implementations optimized for modern hardware. This talk will cover the necessary mathematical ideas alongside performant implementations. Visualizations will help clarify the concepts.

What is the difference between Linear, Exponential, and Through-Zero FM? How is FM related to Phase Modulation (PM), Ring Modulation (RM) and Amplitude Modulation (AM)? Why do digital FM synths sound different if they are all using the same technique? These questions and more will be answered.

Finally, an entirely novel way to conduct FM synthesis will make its exclusive premiere.

Attendees should leave with a better understanding of FM and software synthesizers employing it. In addition, they will learn how to combine conceptual building blocks to build larger systems while staying within system design constraints.
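As a small illustration of the first question above (not material from the talk itself): most digital "FM" synths actually perform phase modulation, adding the modulator's output directly to the carrier's phase, whereas true linear FM adds it to the instantaneous frequency, i.e. to the integral of the phase increment. A minimal phase-modulation sample might look like:

```cpp
#include <cmath>

// Phase modulation (what DX7-style "FM" synths actually compute): the
// modulator is added directly to the carrier's phase argument. True linear
// FM would instead add the modulator to the frequency, i.e. integrate it
// into the phase over time.
double pmSample(double t, double carrierHz, double modHz, double index)
{
    const double twoPi = 6.283185307179586;
    return std::sin(twoPi * carrierHz * t + index * std::sin(twoPi * modHz * t));
}
```

With `index = 0` this degenerates to a plain sine oscillator, which is one easy sanity check when implementing either variant.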


Speakers
avatar for George Gkountouras

George Gkountouras

Independent Developer, Arthurian Audio
George Gkountouras (MSc ECE) is a software engineer, researcher and entrepreneur in the audio software industry. He believes that AI will enable the creation of state-of-the-art music technology products. He has previously given a talk at ADC about his quantum sequencer application... Read More →


Wednesday November 16, 2022 2:00pm - 2:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Networked low-latency audio in the wild: crossing the LAN boundary
Audio over IP (AoIP) and Audio over Ethernet (AoE) protocols are nowadays robust and often codified into standards like AES67. However, they cannot be used straight away to stream audio outside the boundaries of a Local Area Network (LAN).

While there are multiple products that are capable of establishing low-latency, high-quality audio connections in a Wide Area Network (WAN) such as the Internet, we are still far from having a standard in this area.

This talk will give an overview of all the challenges in making live remote musical performances possible over the Internet, e.g.:
  • How can we establish a direct connection over multiple layers of NAT?
  • How can we handle lost packets?
  • What are the technical implications of varying latency and jitter for a musical performance?
  • How can the stream be securely encrypted?

The talk will provide established references to be used for further investigation, and several examples from open-source and commercial applications.


Speakers
avatar for Stefano Zambon

Stefano Zambon

CTO, Elk
Wearing several hats in a music tech startup building Elk Audio OS. Loves all aspects of music DSP from math-intense algorithms to low-level kernel hacking for squeezing latency and performance.
avatar for Maxime Gendebien

Maxime Gendebien

Python Developer, Elk Audio
The road that led Max to be a full-time Python developer is not a straight one. Previous careers include jazz guitarist, recording engineer and mixing engineer which opened the doors of code through Arduino and Max-MSP. It's only after moving his family to Sweden that he fully committed... Read More →


Wednesday November 16, 2022 2:00pm - 2:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Unit testing the audio processors
The benefits of testing software are well-known. It uncovers bugs, enhances the development process, and helps write high-quality code. However, getting started with testing frameworks and unit testing can be complicated. The goal of this talk is to cover setting up a unit test framework for a JUCE plugin and getting started with the framework. During the talk, some practical unit tests will be presented. These example test cases range from simple sanity checks to verifying the audio processor output.
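A sanity check of the kind described can be sketched framework-agnostically (the `applyGain` processor below is a hypothetical stand-in; in a JUCE project such a check would typically live inside JUCE's UnitTest class or a framework such as Catch2):

```cpp
#include <vector>

// A trivial stand-in for an audio processor: apply a gain to a block.
std::vector<float> applyGain(std::vector<float> block, float gain)
{
    for (float& s : block)
        s *= gain;
    return block;
}

// Simplest possible sanity check: silence in must give silence out.
bool silenceInGivesSilenceOut()
{
    auto out = applyGain(std::vector<float>(128, 0.0f), 0.5f);
    for (float s : out)
        if (s != 0.0f)
            return false;
    return true;
}
```

Checks like this catch gross wiring errors cheaply before more elaborate tests verify the processor's actual output.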


Speakers
avatar for Esa Jääskelä

Esa Jääskelä

Software Developer, Buutti Oy
I'm an embedded software developer by profession, a musician by passion, and a hobbyist audio programmer because I'm hoping to find a way to combine these two things somehow. Lately, I've been focusing on Linux, C++ and testing.


Wednesday November 16, 2022 2:00pm - 2:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Tips from a hacker on license checking
Piracy remains a ubiquitous and persistent problem for audio developers attempting to commercialize their products. This is in part because most custom license checking schemes contain fairly basic mistakes that make it easy for crackers to bypass them. This presentation unveils tricks and techniques commonly used by crackers, walks through the design of several license checking schemes, and explains how they can be made more resistant to common attacks. While no scheme is completely uncrackable, improving security across the industry could make piracy much less prevalent.


Speakers
CK

Chase Kanipe

Chase Richard Kanipe


Wednesday November 16, 2022 2:00pm - 2:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

CHOC: Why can't I stop writing libraries or using backronyms?
You'd think that after spending most of my career from 2004 to 2020 writing C++ library code, I'd have managed to move on from doing libraries. Clearly not. This talk presents "CHOC" which is yet another C++ library that's hopefully going to be useful for a lot of audio developers.

CHOC is a free, header-only collection of C++ bits-and-bobs that began as a few classes that I stuck into a repo for personal use, but which has fattened up into something that now seems worth presenting to the ADC crowd.

I'll try to make this talk interesting by showing how some of the seemingly obvious classes in CHOC actually represent years of hindsight and regret. Quite a few things in the library are my attempt to finally nail the design of functionality that I've implemented differently in the past, so I'll use this to illustrate the factors, thought processes and good coding practices that matter when writing good generic code.

If you want to know more about what's in CHOC, the github is here: https://github.com/Tracktion/choc


Speakers
avatar for Julian Storer

Julian Storer

CEO, Sound Stacks Ltd
Jules is a developer and founder who has created several audio technologies and companies in his 20+ year career. He's best known for creating JUCE and Tracktion, and is currently CEO of Sound Stacks Ltd.


Wednesday November 16, 2022 3:00pm - 3:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

LEGOfying audio DSP engines
Audio DSP is at the heart of music software and digital hardware, yet it is often overlooked in various ways by product makers. Developing high-quality DSP algorithms is error-prone and time-consuming, and it requires deep expertise in computer programming along with extensive knowledge of a large number of other subjects. This easily translates into substantial costs when hiring capable DSP engineers or otherwise acquiring the required skills. Furthermore, it can be difficult to objectively communicate desired sound characteristics and quantify sound quality. No wonder digital products had a bad reputation for a long time.

While there are some peculiarities that make music DSP development unfortunately more complicated than other engineering tasks, we can still apply lessons learnt in other fields by tastefully adapting them to our case. I'll describe two potential complementary approaches: on one hand I'll draw a parallel between software libraries, object-oriented programming abstractions, and patching systems (such as Pure Data and Max/MSP); on the other, I'll discuss DSP programming languages in their compositional aspects. Code reuse, and thus modularization, will be the common theme, and it will be tackled also from a "cultural" point of view.

Hopefully this talk will help:
  • startup founders to use their budgets more wisely, avoiding shortcuts that lead nowhere or overspending on custom DSP development;
  • companies with little internal DSP knowledge or few resources to seize sudden market opportunities;
  • established companies to rationalize effort and commit resources where it really makes a difference.
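The patching analogy the talk draws can be illustrated with a minimal sketch (not the speaker's code): each DSP stage is a self-contained, reusable unit, and a chain composes them much like patch cords connect objects in Pure Data or Max/MSP.

```cpp
#include <functional>
#include <vector>

// A DSP stage is any per-sample transformation; a chain is just an ordered
// collection of stages, composed the way patch cords compose objects.
using Stage = std::function<float(float)>;

float processChain(const std::vector<Stage>& chain, float x)
{
    for (const auto& stage : chain)
        x = stage(x);  // each stage's output feeds the next stage's input
    return x;
}
```

Real modular engines replace `std::function` with block-based processing and static dispatch for performance, but the compositional idea is the same.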


Speakers
avatar for Stefano D'Angelo

Stefano D'Angelo

Founder CEO, Orastron Srl unipersonale
I am a music DSP researcher and engineer, as well as the founder and CEO of Orastron. I help companies around the world, such as Arturia, Neural DSP, Darkglass Electronics, and Elk, in creating technically-demanding digital synthesizers and effects. I also strive to push audio technology... Read More →


Wednesday November 16, 2022 3:00pm - 3:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

PANEL: Starting your first audio software company
- What’s the best direction for you: building products for clients, for end users, or both?
- Key legal issues to consider as you start out – equity, IP, trademarks, licenses – and protecting your investment (IP and anti-piracy)
- Where do you get the initial money from?
- What should you build for my first product?
- Open-source alternative models
- Building your brand and your client list
- Learning from your mistakes and overcoming obstacles
- Optimising your marketing, and leveraging the different platforms that are available
- Building your team and identifying skills gaps
- How to set your rates – making sure you offer value whilst still being able to make a living!
- How to win investment and funding as you start to scale up
- Maintaining a healthy work/life balance


Speakers
avatar for Rebekah Wilson

Rebekah Wilson

CEO, Source Elements LLC
Rebekah is the technical co-founder and CEO who co-created the entire suite of Source Elements software. With a degree in music composition and a lifetime of love for technology, Rebekah has focused her career as a composer, electronic music researcher, and software developer - with... Read More →
avatar for Heather Rafter

Heather Rafter

Principal, RafterMarsh
Heather Dembert Rafter has been providing legal and business development services to the audio, music technology, and digital media industries for over twenty-five years. As principal counsel at RafterMarsh US, she leads the RM team in providing sophisticated corporate and IP advice... Read More →
avatar for Adam Wilson

Adam Wilson

Software Developer, Code Garden
Adam is the founder and chief of Node Audio, where he supports clients and developers creating awesome audio software. Between managing teams, coding and growing his business, he also makes music, and is an advocate of microtonality. In 2022 he launched a plugin called Entonal Studio... Read More →
avatar for Christian Luther

Christian Luther

Founder, Playfair Audio
Christian is an audio DSP expert based in Hannover, Germany. In the past, he has worked in R&D with brands such as Access, Kemper Amps and Sennheiser. In 2022, Christian founded his one-person audio plugin company Playfair Audio.
avatar for Marius Metzger

Marius Metzger

Entrepeneur, CrispyAudio
My name is Marius, I'm 23 years old and have a passion for product design, leadership, and, of course, software development. After finishing school at 16 years of age, I got right into freelance software development, with Google as one of my first clients. In 2020, I released a pitch... Read More →


Wednesday November 16, 2022 3:00pm - 3:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

PSOLA, ESOLA, MBROLA, oh my! Designing a pitch shifting algorithm from the ground up for use in a realtime vocal harmonizer instrument
Pitch shifting is a deep topic with many applications, from AutoTune to creating the voices of Alvin and the Chipmunks. There is a large body of scholarly work available on this topic, but most of it is filled with dense formulae incomprehensible to non-mathematicians. This talk will walk through each step in the pitch-shifting process, breaking down these formulas into simpler concepts and including code examples, concluding with how these concepts are being applied in a new vocal harmonizer instrument currently under development.
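One of the basic building blocks involved, sketched here as a hypothetical illustration rather than the presenter's code, is resampling a grain of audio by a pitch ratio; PSOLA-style methods then overlap-add such grains at pitch-synchronous positions so the overall duration is preserved.

```cpp
#include <cstddef>
#include <vector>

// Resample a grain by `ratio` using linear interpolation: reading the input
// faster (ratio > 1) raises the pitch, reading slower (ratio < 1) lowers it.
std::vector<float> resampleGrain(const std::vector<float>& in, double ratio)
{
    std::vector<float> out;
    for (double pos = 0.0; pos + 1.0 < in.size(); pos += ratio)
    {
        const std::size_t i = static_cast<std::size_t>(pos);
        const double frac = pos - static_cast<double>(i);
        // Interpolate between the two neighbouring input samples.
        out.push_back(static_cast<float>((1.0 - frac) * in[i] + frac * in[i + 1]));
    }
    return out;
}
```

Resampling alone changes duration along with pitch; the overlap-add stage is what decouples the two.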


Speakers
avatar for Ben Vining

Ben Vining

Independent Contractor
I'm a self-taught C++ developer. I love pitch detection and pitch shifting algorithms, but I also love working with CMake and devops systems.


Wednesday November 16, 2022 3:00pm - 3:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

3:50pm GMT

Break
Wednesday November 16, 2022 3:50pm - 4:20pm GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

Allocations considered beneficial: How to allocate on the audio thread, and why you might want to
Audio programmers have traditionally been warned away from doing memory allocations on the audio thread, for very good reasons. But that has forced companies to limit feature sets and write unnatural code.

What if we could allocate freely in our audio code, and even reap some performance benefits from doing so? How would that enable us to improve our products and our code?

In this talk we're going to take a whistlestop tour of C++'s std::pmr namespace, and discuss how to use it in the low-latency environment of audio processing.
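To make the premise concrete, here is a minimal, illustrative sketch (not code from the talk) of the std::pmr approach: reserve a buffer ahead of time and let a pmr container "allocate" from it on the audio thread, with std::pmr::null_memory_resource as the upstream so any accidental heap fallback fails loudly instead of silently hitting the allocator.

```cpp
#include <array>
#include <cstddef>
#include <memory_resource>
#include <vector>

// A per-block scratch vector that never touches the global heap: all of its
// storage comes from a buffer reserved before real-time processing starts.
float processBlockSum()
{
    static std::array<std::byte, 4096> pool;  // reserved up front, off the audio thread

    // Bump-allocates out of `pool`; the null upstream means exhausting the
    // pool throws instead of quietly calling the heap on the audio thread.
    std::pmr::monotonic_buffer_resource arena(pool.data(), pool.size(),
                                              std::pmr::null_memory_resource());

    std::pmr::vector<float> scratch(&arena);  // draws its storage from the arena
    scratch.assign(64, 0.25f);                // temporary workspace for this block

    float sum = 0.0f;
    for (float s : scratch)
        sum += s;
    return sum;  // the arena releases nothing until it is destroyed: O(1) cleanup
}
```

The bump allocation here is branch-cheap and lock-free, which is where the performance benefit the abstract hints at comes from.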


Speakers
avatar for Tom Maisey

Tom Maisey

Developer, Cradle
Tom Maisey is a programmer who has worked in audio for more than ten years, including at ROLI and Cradle. He spends most of his time thinking about how to make audio development more interactive and fun.


Wednesday November 16, 2022 4:20pm - 4:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

APPLE: Deploying v2 Audio Units as v3 Audio Unit Extensions
In this talk, we will demonstrate how an existing v2 Audio Unit can be deployed as a v3 Audio Unit Extension for added capabilities like iOS support and Objective-C / Swift compatibility. This can be done without modifying existing code by means of the AUAudioUnitV2Bridge wrapper class. First, we will show how to build a non-UI v3 Audio Unit Extension using the recently released AUGenericViewController for quick evaluation and testing. Then we will show how to reuse the Audio Unit's Cocoa UI on macOS, and finally how to add a new cross-platform AppKit / UIKit-based UI written in Swift.




Speakers
avatar for Marc Boucek

Marc Boucek

Core Audio Software Engineer, Apple


Wednesday November 16, 2022 4:20pm - 4:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

Building the Accelerated Audio Computing Industry of Tomorrow: GPU Audio
What do the GPU industry and Pro Audio have in common? For years, the answer was “not much” to just about everyone. But the dreamers kept dreaming, and a solution was quietly being built for many years. In 2020, at the height of the pandemic, an entirely remote team of professional musicians, engineers, computer scientists and GPU architects came out of the shadows to begin an open dialogue with both industries about how it was time for audio processing to get a serious upgrade. In this brief promotional talk, the co-founders of GPU Audio (Alexander Talashov and Jonathan Rowden) will share an overview of their vision of building an accelerated audio computing industry niche powered by GPUs, and invite the audience to be part of that journey. 
What's been done as of today?
- GPU Audio Tech Stack
- First products: Early access and beta 
- First reveals: New products and features

What will we do in the midterm?
- SDK release (premiering a portion here at ADC Hands-On)
- Spatial Audio (Mach1 technologies collaboration)
- More integrations 

How do we define the future of this technology?

What is required to bring real-time accelerated audio computing for ML-powered algorithms on GPU?
- Machine Learning Frontend, Backend Implementation and API
- Developer Community to Launch by 2023

Growth and Opportunity
- Partnership Opportunities and Hiring
- Vertical integration: how GPU Audio is impacting audio broadly


Speakers
JR

Jonathan Rowden

co-founder and CBO, GPU AUDIO INC
Hello ADC community, my name is Jonathan Rowden and I am the CBO and co-founder of GPU AUDIO, a new core-technology company focused on unlocking GPU based parallel processing, for greatly accelerated real-time and offline DSP tasks. Our mission is to provide a new backbone of processing... Read More →


Wednesday November 16, 2022 4:20pm - 4:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

Dante for Audio Developers
Learn how to use Dante software solutions by Audinate. In this session, Dante experts will provide an overview of the Dante solution and a deep dive into the software-specific solutions available for Mac, Windows and Linux. Audinate will demonstrate a Dante module for JUCE. At the end, everyone present in person will be provided with a free trial license!


Speakers
avatar for Lucas Moreno

Lucas Moreno

Senior Technical Sales Engineer, Audinate
Coming from the IT industry but having evolved into the AV one, I currently work for Audinate helping customers leverage their AV systems with Dante.
AS

Andy Saul

Staff Software Engineer, Audinate


Wednesday November 16, 2022 4:20pm - 4:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

5:00pm GMT

KEYNOTE: Incompleteness Is a Feature Not a Bug
If you’ve been in the music technology field for any length of time, you may have encountered the visual programming environment called Max. Maybe you’ve wondered what kind of people work on a computer program that doesn’t really seem to do anything?

In this talk David will share the unlikely story of Max’s transformative impact on both people and organizations, starting with his own life and that of his company Cycling ‘74. Twenty-five years ago he was a reluctant entrepreneur with no real goals other than continuing to work on some cool software. Over time, he became more interested in exploring new ways of working, and realized that Max itself was an inspiration for the culture he was seeking as a software developer. Max's design and philosophy have allowed us to work as a fully remote team since the beginning with little need for planning and hierarchy. David will identify some properties common to both software and organizational architecture — many learned through trial and error — that seem to sustain the creative flourishing of both people and teams. Some of these include learning to solve less than 100% of the problem, parameterizing interdependence and personal development, and building trust through innovation instead of rules. Finally, David will discuss some limitations of the Max approach and show how they’ve tried to address them in their most recent work related to code generation and export.




Speakers
avatar for David Zicarelli

David Zicarelli

CEO, Cycling ‘74
David Zicarelli is a computer programmer and improvising musician who designs interactive software to support creative expression. He has been working on Max and Max-related projects since the late 1980s. Prior to his Max life he created one of the first graphical voice editors for... Read More →


Wednesday November 16, 2022 5:00pm - 6:00pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

6:00pm GMT

Closing Address
Wednesday November 16, 2022 6:00pm - 6:15pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

6:15pm GMT

Evening Meal & Networking
Wednesday November 16, 2022 6:15pm - 7:30pm GMT
CodeNode 10 South Pl, London EC2M 7EB, UK

7:30pm GMT

Open Mic Night
The ADC Open Mic Night is back! A fun, informal evening with lightning talks, music performances, and some impromptu standup comedy.

If you are attending the ADC on site, you can contribute to the Open Mic night with a 5 minute talk or performance! Please use the sign up form here.

This is an event exclusively for on-site attendees. It won't be recorded, published, or streamed online.

Speakers
avatar for Pete Goodliffe

Pete Goodliffe

CTO, inMusic Brands
Experienced software developer, architect/product designer, leader, columnist, speaker, and author. Herder of cats and shepherd of products. Specialises in Music Industry projects, often involving high-quality C++ on desktop and embedded platforms, and iOS development. Currently... Read More →


Wednesday November 16, 2022 7:30pm - 9:00pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

9:00pm GMT

Networking
Wednesday November 16, 2022 9:00pm - 10:00pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK
 