Beginner
Monday, November 14
 

9:30am GMT

Online Workshop: Dynamic Cast: Practical Digital Signal Processing
ADC X Dynamic Cast - Practical Digital Signal Processing

What is a digital audio signal? How do we generate one, and in what ways can we manipulate it and extract useful information from it? In this workshop we'll explore the life cycle of an audio signal, from a continuous acoustic signal to a discrete digital signal, along with practical methods for processing and shaping audio, including:
  • Sampling theory
  • Filtering 
  • Block vs sample-based processing 
  • Moving between the time and frequency domain
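As a taste of the block vs sample-based distinction above, here is a minimal sketch (an illustration, not workshop material) of the same hypothetical gain stage written both ways:

```cpp
#include <vector>

// Sample-based: the processing function is called once per sample.
// Simple to reason about, but the per-call overhead adds up.
float processSample(float in, float gain) {
    return in * gain;
}

// Block-based: the function receives a whole buffer at once, which
// amortises call overhead and lets the compiler vectorise the loop.
void processBlock(std::vector<float>& block, float gain) {
    for (float& s : block)
        s *= gain;
}
```

Most plugin APIs hand the host's audio to you a block at a time, so even a conceptually per-sample algorithm usually lives inside a block loop.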

Dynamic Cast - Who Are We?

Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+, minority ethnic). Both Dynamic Cast workshops at ADC are designed to create an entry point to the industry for newcomers; everyone is welcome.
Requirements for this Workshop
TBA. Keep an eye out for an email closer to the event.

A laptop and paper/pen would be beneficial, but no one will be turned away.

Speakers
Harriet Drury

Junior Software Engineer, Sound Stacks Ltd

Rachel Locke

Software Engineer, ChowdhuryDSP

Anna Wszeborowska

Software Developer and Researcher, Freelance
Anna is a freelance software developer and a PhD student at the Creative Computing Institute, University of the Arts London. She's worked on music production and live performance tools for the last 8 years. During her time at Ableton she contributed to the integration of the company's...


Monday November 14, 2022 9:30am - 12:30pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

9:30am GMT

Workshop: Dynamic Cast: Practical Digital Signal Processing

ADC X Dynamic Cast - Practical Digital Signal Processing

What is a digital audio signal? How do we generate one, and in what ways can we manipulate it and extract useful information from it? In this workshop we'll explore the life cycle of an audio signal, from a continuous acoustic signal to a discrete digital signal, along with practical methods for processing and shaping audio, including:
  • Sampling theory
  • Filtering 
  • Block vs sample-based processing 
  • Moving between the time and frequency domain

Dynamic Cast - Who Are We?

Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+, minority ethnic). Both Dynamic Cast workshops at ADC are designed to create an entry point to the industry for newcomers; everyone is welcome.
Requirements for this Workshop
TBA. Keep an eye out for an email closer to the event.

A laptop and paper/pen would be beneficial, but no one will be turned away.

Speakers
Anna Wszeborowska

Software Developer and Researcher, Freelance
Anna is a freelance software developer and a PhD student at the Creative Computing Institute, University of the Arts London. She's worked on music production and live performance tools for the last 8 years. During her time at Ableton she contributed to the integration of the company's...

Harriet Drury

Junior Software Engineer, Sound Stacks Ltd

Rachel Locke

Software Engineer, ChowdhuryDSP


Monday November 14, 2022 9:30am - 12:30pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Online Workshop: Dynamic Cast: Practical Software Engineering
ADC X Dynamic Cast - Practical Software Engineering

In this workshop we will share techniques used in everyday programming work to prepare you for contributing to the large codebases we deal with in professional contexts.

We will discuss how to read and analyze code, how to find entry points into a complex system to add features or debug problems, and how to make sure your code is well designed and therefore easy to maintain, change, or build upon. We'll also look into building and sharing your programs.

Dynamic Cast - Who Are We?

Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+, minority ethnic). Both Dynamic Cast workshops at ADC are designed to create an entry point to the industry for newcomers; everyone is welcome.

Requirements for this Workshop
TBA. Keep an eye out for an email closer to the event.

A laptop and paper/pen would be beneficial, but no one will be turned away.

Speakers
Harriet Drury

Junior Software Engineer, Sound Stacks Ltd

Rachel Locke

Software Engineer, ChowdhuryDSP

Anna Wszeborowska

Software Developer and Researcher, Freelance
Anna is a freelance software developer and a PhD student at the Creative Computing Institute, University of the Arts London. She's worked on music production and live performance tools for the last 8 years. During her time at Ableton she contributed to the integration of the company's...


Monday November 14, 2022 2:00pm - 5:00pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Workshop: Analog Circuit Modelling for Software Developers using the Point-To-Point Library

During this workshop, participants will learn about digital modeling of analog circuits. This will be applied to the creation of several JUCE plug-ins. Traditional modeling techniques will be discussed along with the presentation of a circuit analysis library which automates the modeling process. This library, called "Point-To-Point Modeling," is intended for audio software developers interested in rapid prototyping and implementation of circuit modeling. Example JUCE plug-ins using the Point-To-Point library will be demonstrated, along with the process of quickly converting arbitrary schematics into C++ code.

  • Attendees should have some experience using JUCE
Code repository for the workshop:
https://github.com/HackAudio/PointToPoint_LT
Code repository as an additional resource:
https://github.com/HackAudio/PointToPoint_MATLAB

Speakers
Eric Tarr

Engineer, Hack Audio
Dr. Eric Tarr teaches classes on digital audio, computer programming, signal processing and analysis at Belmont University. He received a Ph.D., M.S., and B.S. in Electrical and Computer Engineering from the Ohio State University. He received a B.A. in Mathematics and a minor in Music...


Monday November 14, 2022 2:00pm - 5:00pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Workshop: Dynamic Cast: Practical Software Engineering

ADC X Dynamic Cast - Practical Software Engineering

In this workshop we will share techniques used in everyday programming work to prepare you for contributing to the large codebases we deal with in professional contexts.

We will discuss how to read and analyze code, how to find entry points into a complex system to add features or debug problems, and how to make sure your code is well designed and therefore easy to maintain, change, or build upon. We'll also look into building and sharing your programs.

Dynamic Cast - Who Are We?

Dynamic Cast is a peer-to-peer C++ study group and a safe space for underrepresented groups (women, LGBTQIA+, minority ethnic). Both Dynamic Cast workshops at ADC are designed to create an entry point to the industry for newcomers; everyone is welcome.

Requirements for this Workshop
TBA. Keep an eye out for an email closer to the event.

A laptop and paper/pen would be beneficial, but no one will be turned away.

Speakers
Anna Wszeborowska

Software Developer and Researcher, Freelance
Anna is a freelance software developer and a PhD student at the Creative Computing Institute, University of the Arts London. She's worked on music production and live performance tools for the last 8 years. During her time at Ableton she contributed to the integration of the company's...

Harriet Drury

Junior Software Engineer, Sound Stacks Ltd

Rachel Locke

Software Engineer, ChowdhuryDSP


Monday November 14, 2022 2:00pm - 5:00pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK
 
Tuesday, November 15
 

9:00am GMT

Implementing Real-Time Parallel DSP on GPUs
GPU-powered audio has long been considered something of a unicorn in the pro audio and accelerated computing industries alike. The implications of powering accelerated DSP via a GPU's parallel architecture are simultaneously exciting and incredibly frustrating; to many it would seem that the ease with which GPUs handle massive numbers of tasks is rivalled only by the difficulty of understanding their architecture, particularly for the average DSP developer. Until now, the state of research has concluded that, because of heavy latency and a myriad of computer science issues, DSP on GPUs was simply not possible or preferable. This is no longer the case.

The implications and use cases are great: ultra-fast plugins, scalable power, hundreds or even thousands of channels at low latency, dramatically better software performance (10x-100x), cloud processing infrastructure, accelerated AI/ML and more. GPUs can now offer a bright future for DSP. In this talk we will share the challenges and solutions of GPU-based DSP acceleration.

  1. Why GPUs?
  2. 3 Challenges of GPU-based Audio Processing
    - Parallelism and Heterogeneity
    - Multiple Tracks and Effects
    - Data Transfer Problems: GPU <> CPU
  3. Core Component Overview: The Scheduler
    - Host Scheduler and Device Scheduler
    - How Scheduler Addresses the “3 Challenges”
  4. Some Examples: FIR and IIR Algorithms - Can They Be Parallelized?
    - Algorithmic and Platform Optimization
    - GPU Audio Workflow Schematics
      - GPU Audio Component
      - DSP API
      - Processor API
      - DSP Components Library
  5. Roadmap and Some Use Case Considerations
  6. Q&A and Invitation to Training Lab (Gain, IIR and FIR Convolver Hands-On Training Lab)
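To illustrate point 4 of the outline, here is a minimal scalar sketch (reference code only, not the GPU Audio implementation) of why FIR filters parallelise naturally while IIR filters resist it:

```cpp
#include <cstddef>
#include <vector>

// FIR: y[n] = sum_k h[k] * x[n-k]. Every output sample depends only on
// the input, so each iteration of the outer loop could in principle run
// on its own GPU thread.
std::vector<float> fir(const std::vector<float>& x, const std::vector<float>& h) {
    std::vector<float> y(x.size(), 0.0f);
    for (std::size_t n = 0; n < x.size(); ++n)            // parallelisable
        for (std::size_t k = 0; k < h.size() && k <= n; ++k)
            y[n] += h[k] * x[n - k];
    return y;
}

// One-pole IIR: y[n] = x[n] + a * y[n-1]. Each output depends on the
// previous output, so a naive port cannot simply split this loop across
// threads; it needs a scan-style or block-partitioned reformulation.
std::vector<float> onePole(const std::vector<float>& x, float a) {
    std::vector<float> y(x.size(), 0.0f);
    float prev = 0.0f;
    for (std::size_t n = 0; n < x.size(); ++n) {
        prev = x[n] + a * prev;
        y[n] = prev;
    }
    return y;
}
```

The feedback term in the IIR loop is exactly the kind of serial dependency the talk's scheduler discussion addresses.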

IF YOU ARE ATTENDING ONLINE, ALL TALK SESSIONS CAN BE ACCESSED FROM THE MAIN LOBBY: https://conference.audio.dev

Speakers
Rumen Angelov

Plugin Development Team Lead, GPU Audio
I completed my education in Music and Audio Technology at Bournemouth University, Dorset. Primarily experienced in audio plugin development for both Microsoft and Apple operating systems and the major plugin formats. Briefly worked on audio processing for proprietary ARM-based...

Andres Ezequiel Viso

Product Manager, Braingines SA / GPU Audio Inc
I studied Computer Science at the University of Buenos Aires and received my PhD on semantics for functional programming languages. I did a postdoc at Inria, France, in the context of the Software Heritage project, developing the provenance index for the SWH Archive. My interests vary...


Tuesday November 15, 2022 9:00am - 9:50am GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

9:00am GMT

Introduction to the Audio Definition Model and its use for Spatial Audio and Next Generation Audio experiences
Spatial audio has gone way beyond the good old 5.1 system and is reaching mainstream audiences through mobile phones and TV soundbars. The spatial audio content creation ecosystem and its workflows are now developing at a rapid pace around the ADM-BW64 file format, which is now supported by major DAWs. This talk presents the key concepts of the Audio Definition Model (ADM), its benefits for studio, live and broadcast workflows, and why it is interesting as an interoperable model for spatial audio in general. Beyond spatial audio, ADM also enables use cases such as audio personalization and interactivity, which will be highlighted.


Speakers
Guillaume Le Nost

Managing Director, L-Acoustics
Shaping the future of live sound with immersive audio technologies and innovative sound experiences. Interests in spatial audio, object-based audio, creative technologies, music technology and live sound. Keen musician (flute, bass, piano).

David Marston

Senior R&D Engineer, BBC


Tuesday November 15, 2022 9:00am - 9:50am GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

10:00am GMT

Better Adaptive Music for Starship Troopers
Kejero's glossary: kejero.com/adc

Today's common adaptive music techniques often still fall short when it comes to melodic or orchestral video game scores.

Can you hear the transition to another track? Or an instrument fading in? That's the sound of a system at work.

In this talk, Kejero will explain how he designed a system that doesn't use transitions. He will show how he implemented it in Starship Troopers: Terran Command, which required a massive orchestral score: music that can switch between calm and chaotic in a heartbeat yet still sounds like one cohesive, intentional piece of music.

Outline:
  • Level 0: Existing Techniques
  • Level 1: A New Foundation
  • Level 2: An Intelligent Conductor
  • Level 3: Extended Techniques
  • Two-way Communication
  • 10 Extremely Practical Tips & Tricks


Speakers
Kejero

Composer, Kejero
Kejero (www.kejero.com) recently provided the massive orchestral score for Starship Troopers: Terran Command. He reinvented video game scoring with his Better Adaptive Music software. With BAM, the music can switch between calm and chaotic in a heartbeat, yet still sound like one cohesive, intentional pie...


Tuesday November 15, 2022 10:00am - 10:50am GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Detaching the UI - Options and challenges for controlling headless and remote audio software
Audio software running in “headless” or remote contexts, i.e. without access to a tightly integrated GUI, is increasingly common, whether running on embedded devices, on a remote cloud server, or distributed over a local network where remote or automated control is desired.

The parameter controls exposed over plugin APIs are insufficient, since the practically usable implementations support only a fraction of the variety necessary. Developers expose many additional controls through the GUI, which doesn't translate to headless or remote use.

Plugin GUIs can be as involved as a fully-fledged DAW, exposing complex interactions. Although MIDI 2.0 will in the medium term replace some of what we discuss, even with full adoption it doesn't cover the many interactions possible from a GUI.

In this talk we discuss three basic distributed-systems patterns for controlling audio software at run time over a network: simple socket messaging, request/response, and publish/subscribe.

We also demonstrate their implementation using the OSC and gRPC frameworks, discussing challenges and best practices specific to real-time audio.

Grounding the above, we provide a pair of ready-to-use, fully-fledged open-source applications implementing our suggestions, both available to download.
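As a taste of the simplest of the three patterns, here is a sketch (illustrative, not taken from the talk's applications) that hand-encodes a one-float OSC message per the OSC 1.0 spec: strings NUL-padded to 4-byte boundaries, a type-tag string, then big-endian arguments. The resulting bytes would be handed to a UDP socket's send call.

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append a string padded with NULs to a 4-byte boundary, per the OSC spec.
static void appendPadded(std::vector<std::uint8_t>& buf, const std::string& s) {
    buf.insert(buf.end(), s.begin(), s.end());
    buf.push_back(0);
    while (buf.size() % 4 != 0) buf.push_back(0);
}

// Encode an OSC message with a single float32 argument,
// e.g. "/filter/cutoff" 440.0 (the address here is hypothetical).
std::vector<std::uint8_t> encodeOscFloat(const std::string& address, float value) {
    std::vector<std::uint8_t> buf;
    appendPadded(buf, address);
    appendPadded(buf, ",f");              // type-tag string: one float32
    std::uint32_t bits;
    std::memcpy(&bits, &value, 4);
    for (int i = 3; i >= 0; --i)          // OSC numbers are big-endian
        buf.push_back(static_cast<std::uint8_t>(bits >> (i * 8)));
    return buf;
}
```

In practice a library such as liblo or a gRPC service handles this plumbing; the point is that the wire format itself is small enough to reason about on the audio thread.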


Speakers
Ilias Bergström

Senior Software Engineer, Elk Audio
Computer scientist, researcher, interaction designer and musician, with a love for all music but especially live performance. I've worked on developing several applications for live music, audiovisual performance, and use by experts, mainly using C++. I get...

Gustav Andersson

Senior Software Developer, Elk Audio
Will code C++ and Python for fun and profit. Developer, guitar player and electronic music producer with a deep fascination with everything that makes sounds in one form or another. Currently on my mind: modern C++ methods, DSP algos, vintage digital/analog hybrid synths.


Tuesday November 15, 2022 11:20am - 12:10pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Trying to create an Audio Software House
Creating audio software is hard. Creating an audio software company is even harder. This session will cover our experience growing from a single dev to 20 devs working in multiple teams, and all the pains and hacks we encountered along the way: pipeline, tests, management, tools, processes, monorepo, etc. This is not about the correct way of doing things, but about a journey and the choices and pains of that journey.


Speakers
Nuno Fonseca

CEO, Sound Particles, S.A.

Vitor Carreira

CTO, Sound Particles, S.A.


Tuesday November 15, 2022 11:20am - 12:10pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Connecting Audio Tools for Game Development
It is common for game titles to include more than 50,000 audio files, ranging from sound effects and dialogue lines to music clips. One of the challenges for game developers is to build an efficient pipeline for creating, organizing and processing such a massive number of audio files, generally coming from different tools.
For instance, REAPER has become a popular choice in the game audio community as a sound design DAW. Indeed, it can handle large projects and can be adapted and integrated into different workflows. How can we help creators focus on content and game experience by reducing the repetitive and error-prone tasks that are currently part of their daily work?
In this talk, we explore the technologies (JUCE, WAAPI) and the architecture of ReaWwise, a Wwise integration into REAPER focused on automation. We discuss how we can connect different tools, including other DAWs, to Wwise from various environments. We will also take an in-depth look at how we implemented WAAPI and WAQL as the core of the extensibility of Wwise.
  • Presentation of speakers
  • Quick overview of what is Wwise 
    • Interactive Audio Challenges
  • Common sound designer example
    • Reviewing pain points
  • Quick demo of what is accomplished by ReaWwise
    • Create object structures/hierarchies in Wwise
    • Import audio files into Wwise
  • REAPER - Why?
  • ReaWwise Tech Overview
    • CMake
    • JUCE
    • WAAPI
    • Components: GUI, DawContext, Data
    • Evaluating how other DAWs could be extended
  • WAAPI Tech Overview
    • Architecture
    • WAQL: a query language
    • Use cases - Examples
    • Benefits of a generic data model
  • Closing Remarks
    • The project being open source
    • Exposing WAAPI to ReaScript/Lua



Speakers
Bernard Rodrigue

Director, Wwise Experience, Audiokinetic
Bernard Rodrigue is Director, Wwise Experience at Audiokinetic. He joined Audiokinetic in 2005 and actively participated in developing the foundations of Wwise. Today, Bernard continues to lead several projects related to the advancement and expansion of Wwise.

Andrew Costa

Software Developer, Audiokinetic
Andrew Costa has been a software developer at Audiokinetic since 2021. He's been working on DAW extensions and plugins, namely ReaWwise and the Wwise VST plugins. He's passionate about software development, DAWs and music production.


Tuesday November 15, 2022 11:20am - 12:10pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

10 Things Every ARA Programmer Should Know
ARA (Audio Random Access) is an API created by Celemony and PreSonus to enable a new class of audio plug-ins that are not used in realtime effect slots, but instead are tied into the arrangement of the DAW.

It is designed for plug-ins such as Melodyne which intrinsically need to evaluate the audio material in its entirety, not sliced into small realtime buffers. In addition to providing random access to the audio samples, ARA enables bi-directional communication about musical properties such as tempo maps, time and key signatures, or chord progressions of both the original audio material and the playback context.

Rather than doing a detailed dive into the API, the talk will focus on several core concepts of ARA that have a profound impact on the design of your code. It strives to give you a better idea about both the features and workflows that users will expect from ARA products, and the costs and liabilities involved. It will enable you to make an educated decision about whether or not ARA is the right tool for your product, and get you started with the right mindset should you go for it.

If you’re interested in this talk, please also note the follow-up session at 4:20 which will demonstrate how ARA is integrated into JUCE.



Speakers
Stefan Gretscher

ARA Lead Developer, Celemony Software GmbH
Stefan's career in audio programming has led him from hand-crafting bare-bones assembler on the DSP-based platforms of the late 90s to working on today's Melodyne, with its roughly 250k lines of C++ code for just the audio model and processing. Along that path, his focus shifted from signal...


Tuesday November 15, 2022 12:20pm - 12:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

Announcing SoundStacks' New Cmajor Platform
The SoundStacks team will be announcing and demonstrating their new Cmajor platform. Cmajor is our new language and platform for audio development, offering great performance and easy development for both beginners and professional DSP programmers. Join us for the great reveal!

Speakers
Julian Storer

CEO, SoundStacks
I'm the creator of Tracktion, JUCE and Cmajor

Cesare Ferrari

CTO, Sound Stacks

Lucas Thompson

Senior Software Engineer, Native Instruments

Harriet Drury

Junior Software Engineer, Sound Stacks Ltd


Tuesday November 15, 2022 12:20pm - 12:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

Live interview with Niklas Odelholm, VP, Softube
Bobby Lombardi (PACE and ADC Chair) will interview Niklas, one of the original four founders of Softube and currently VP of Products. We'll hear about Niklas' history with music and computer technology, and how that led him to meet the other founders of Softube. We'll cover the early years, from a small start-up navigating evolving technologies and getting its modeling technology into the hands of third-party partners, to the eventual move into developing, marketing, and selling products under Softube's own branding, with an impressive and expanding portfolio of officially licensed partners. We'll take a technical dive into the core fundamentals of component-level analog modeling and the challenges of accurately replicating classic vintage characteristics, dynamics, and sound. We'll also discuss signal processing, the ongoing challenge of staying ahead of the curve with cross-platform and DSP development, and the importance of end users' involvement in product design and optimisation.


Speakers
Niklas Odelholm

VP, Softube
Niklas started his career as a signal processing engineer in 2003, but has over the years worked in almost every role at Softube. He is currently VP of Products, working on the big picture (strategy) but is happiest when he can be creative with algorithms or interface design...


Tuesday November 15, 2022 12:20pm - 12:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Case Study: Eliminating C++ Undefined Behavior, Plug-in Contract Violations, and Intel Assumptions in a Legacy Codebase
For large C++ audio-plugin codebases, adding support for a new platform (such as Apple Silicon/ARM) can be a scary, expensive endeavor. One of the biggest causes for alarm is C++ undefined behavior (UB), which is an unfortunate part of many legacy codebases. After a brief review of what UB is, we will discuss what issues it can cause and why it should be avoided. We'll also discuss how paying attention to the details of audio plug-in format "contracts", particularly with regard to threading, can simplify the process of supporting new platforms and new DAWs. Finally, we'll go over the specific cultural and tooling initiatives we used to eliminate bad behavior in our audio plug-in codebase, including how we used static analysis, plug-in validators, and clang runtime sanitizers to identify and address issues. We hope attendees leave the session with actionable ideas for how to address these sorts of issues in their own codebases.
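As one illustrative example (ours, not necessarily from the case study), signed integer overflow is a classic piece of UB that clang's `-fsanitize=undefined` flags at runtime; the sketch below shows a saturating rewrite that stays defined on every platform:

```cpp
#include <cstdint>
#include <limits>

// Signed overflow is undefined behaviour: on one compiler it may appear
// to wrap, on another the optimiser may assume it never happens and
// delete the surrounding checks.
// int bad(int gain) { return gain * 2; }   // UB when gain > INT_MAX / 2

// Portable fix: do the arithmetic in a wider type, then saturate.
std::int32_t scaleSaturating(std::int32_t gain) {
    std::int64_t wide = static_cast<std::int64_t>(gain) * 2;
    if (wide > std::numeric_limits<std::int32_t>::max())
        return std::numeric_limits<std::int32_t>::max();
    if (wide < std::numeric_limits<std::int32_t>::min())
        return std::numeric_limits<std::int32_t>::min();
    return static_cast<std::int32_t>(wide);
}
```

Building the commented-out version with `-fsanitize=undefined` and feeding it a large gain traps immediately, which is exactly the kind of signal a porting effort needs before the new platform's compiler "optimises" the bug into something stranger.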




Tuesday November 15, 2022 2:00pm - 2:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

PANEL: Tabs or Spaces?
A group of opinionated expert programmers will argue over the right and wrong answers to a selection of programming questions which have no right or wrong answers.

We'll aim to cover a wide range of topics such as: use of locks, exceptions, polymorphism, microservices, OOP, functional paradigms, open and closed source, repository methodologies, languages, textual style and tooling.

The aim of the session is to demonstrate that there is often no clear-cut best-practice for many development topics, and to set an example of how to examine problems from multiple viewpoints.


Speakers
Julian Storer

CEO, SoundStacks
I'm the creator of Tracktion, JUCE and Cmajor

David Rowland

CTO, Tracktion
Dave Rowland is the CTO at Audio Squadron (owning brands such as Tracktion and Prism Sound), working primarily on the digital audio workstation Waveform and the engine it runs on. Other projects over the years have included audio plugins and iOS audio applications utilising JUCE...


Tuesday November 15, 2022 2:00pm - 2:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Anatomy of a Bare Metal Synth
This talk is aimed at any embedded-curious audio software developers who have primarily done native software development, but are interested in what goes into building standalone music gizmos like digital synthesizers, guitar pedals, or other noisemakers. Using the Daisy platform as context, we will discuss a number of fundamental bare metal concepts such as serial communication protocols (MIDI!), direct memory access, serial audio interfaces, and general purpose input/output.
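For a flavour of the serial side, here is a minimal sketch (illustrative, not Daisy library code) of a byte-at-a-time MIDI 1.0 note-on parser of the kind a UART interrupt handler would feed:

```cpp
#include <cstdint>
#include <optional>

struct NoteOn { std::uint8_t channel, note, velocity; };

// Running state machine for 3-byte MIDI channel messages, fed one byte
// at a time as they arrive off the 31.25 kbaud serial line. Simplified:
// it assumes two data bytes per message (true for note-on/off, not for
// program change) and ignores system messages.
class MidiParser {
public:
    std::optional<NoteOn> feed(std::uint8_t byte) {
        if (byte & 0x80) {                 // status byte: start a new message
            status_ = byte;
            count_ = 0;
            return std::nullopt;
        }
        if (count_ < 2)
            data_[count_++] = byte;
        if (count_ == 2) {
            count_ = 0;                    // running status: keep status_
            if ((status_ & 0xF0) == 0x90 && data_[1] != 0)  // velocity 0 = note-off
                return NoteOn{static_cast<std::uint8_t>(status_ & 0x0F),
                              data_[0], data_[1]};
        }
        return std::nullopt;
    }
private:
    std::uint8_t status_ = 0, data_[2] = {0, 0}, count_ = 0;
};
```

On bare metal this sort of parser runs in the interrupt, pushing events into a lock-free queue that the audio callback drains, while DMA shuttles the rendered audio to the codec.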


Speakers
Jack Campbell

Senior Software Engineer, Universal Audio
I have had a lot of fun writing audio software for the last five years. I mostly write audio stuff for Universal Audio native plugins these days, but I'm excited to share what I've learned about embedded audio programming via the deep dives I've done on the side. (And for UA in a...


Tuesday November 15, 2022 2:00pm - 2:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

Develop, Debug and Deploy: MIDI 2.0 Prototyping and Tools
MIDI 2.0 extends MIDI in many ways: more channels, higher resolution, jitter reduction, auto-configuration via bidirectional transactions.

Core members of the MIDI Association present an overview of the available tools for developing, debugging, and deploying MIDI 2.0 products.

A number of tools have been developed to jump-start prototyping and validation of UMP functions and fuel the transition to MIDI 2.0. These tools include software applications for implementing and debugging UMP software and hardware, and testing MIDI-CI implementations.

All tools will be shown in action and basic usage will be explained.
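As a taste of UMP, the sketch below packs a MIDI 2.0 note-on into the 64-bit packet layout from the published specification (the helper name and struct are our own, not from any vendor tool):

```cpp
#include <cstdint>

// A 64-bit Universal MIDI Packet for a MIDI 2.0 channel voice message:
// word 0: [type=0x4][group][status 0x9|channel][note][attribute type]
// word 1: [16-bit velocity][16-bit attribute data]
struct Ump64 { std::uint32_t word0, word1; };

Ump64 noteOnUmp(std::uint8_t group, std::uint8_t channel,
                std::uint8_t note, std::uint16_t velocity) {
    Ump64 p;
    p.word0 = (0x4u << 28)                                          // message type 4
            | (static_cast<std::uint32_t>(group & 0x0F) << 24)
            | (static_cast<std::uint32_t>(0x90 | (channel & 0x0F)) << 16)
            | (static_cast<std::uint32_t>(note & 0x7F) << 8);       // attribute type 0
    p.word1 = static_cast<std::uint32_t>(velocity) << 16;           // no attribute data
    return p;
}
```

Note the 16-bit velocity, one of the resolution gains over MIDI 1.0's 7 bits.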


Tuesday November 15, 2022 3:00pm - 3:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

Real-time interactive synthesis with ML: differentiable DSP in a plugin
2022 has been an exciting year for machine learning (ML) and realtime neural audio synthesis. We share how we achieved real-time differentiable DSP (DDSP), in an expressive and transcultural instrument, operating with low latency in DAWs and mobile devices worldwide. Our talk will cover:
  • building an intuition for DDSP and the capabilities of an audio machine learning system,
  • what problems we faced and how we achieved real-time DDSP in both pro-audio and consumer applications,
  • how we facilitated creative musical expression in our ML system,
  • how we approached testing, and
  • how we see realtime ML audio processing in future.




Tuesday November 15, 2022 3:00pm - 3:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

Combining serverless functions with audio VST plugins
VST plugins allow us to handle virtual instruments and alter audio signals to create impressive effects in our audio projects. Once users find the perfect parameters to sound just the way they want, the VST allows them to save these parameters locally in XML files.

But what if the user wants the same settings on another device or in another studio, or wants to share them? We can replicate them manually, but why not save these parameters in the cloud and have our collection always available? This workshop will review how to create a serverless project with AWS Amplify, GraphQL and Cognito user authentication to build our VST parameter library in the cloud.
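The local serialization step mentioned above can be sketched as follows (the function and XML element names are illustrative, not a real VST or AWS API); the resulting string is the kind of blob one would then upload through the GraphQL API:

```cpp
#include <map>
#include <sstream>
#include <string>

// Serialise a plug-in's parameter map to a tiny XML document, the same
// shape of data a plug-in would otherwise write to a local preset file.
std::string parametersToXml(const std::map<std::string, float>& params) {
    std::ostringstream xml;
    xml << "<Preset>\n";
    for (const auto& [name, value] : params)
        xml << "  <Param id=\"" << name << "\" value=\"" << value << "\"/>\n";
    xml << "</Preset>\n";
    return xml.str();
}
```

Storing the document per authenticated user (Cognito identity as the partition key) is what turns a local preset folder into a synchronised cloud library.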



Tuesday November 15, 2022 3:00pm - 3:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

Automating Audio Device Testing - A brief existential guide
This will be a whistlestop tour of how automated audio interface testing began at Focusrite.  Be prepared for big questions, big numbers, and a chance to win some Focusrite T-shirts.


Speakers
Joseph Deller

Senior Software Developer in Test Automation, Focusrite Audio Engineering Ltd.
Joe has been recording music and playing in bands since he was a teenager, starting with a valve reel-to-reel recorder. He started coding on a ZX81, worked for software companies large and small, and ran a recording studio in Oxford. Joe combines his love of electronics, music...

Jake Wignall

Software Developer, Focusrite
After being fascinated by the equipment in studios he visited with his teenage band, Jake decided to pursue a degree in music technology, where he was first introduced to the world of programming. He managed to get a placement with Focusrite's QA team and has worked as a Software Developer...


Tuesday November 15, 2022 4:20pm - 4:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

5:00pm GMT

KEYNOTE: The Musical Instruments of Star Trek
In the futuristic universe of Star Trek there are a lot of musical instruments, and many of them use far-future technology. The designers of these instruments never intended them to actually work, and were therefore led by their imaginations rather than by the limitations of earthly technology – the opposite of the instrument design process today, which tends to be heavily influenced by the affordances of the technology we have to hand.

In this talk, music technology researcher and theorist Astrid Bin explains how she explored this imagination-first process of instrument design by recreating an instrument, as faithfully as possible, from the show. Through the process – from discovering the instrument, to getting input from the show's original production designer, to figuring out how to make the instrument's behaviour true to the original intentions (but using primitive 21st-century embedded sensors and computers) – she describes what she learned about designing real digital musical instruments by trying to recreate an imaginary one.

Speakers
Astrid Bin

Ableton AG
Astrid Bin is a music technology researcher and theorist based at Ableton in Berlin. She is also a founding developer of Bela.io, the platform for creating beautiful interaction. She spends her time writing, playing drums, making instruments, and trying to make what is perfect more... Read More →


Tuesday November 15, 2022 5:00pm - 6:00pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK
 
Wednesday, November 16
 

9:00am GMT

Apple, Google and Microsoft Implementations of MIDI 2.0
Engineers from Apple, Google, and Microsoft will present the current state of MIDI 2.0 implementations in their operating systems. We’ll describe the API changes required for MIDI 2.0 for each platform as well as discuss the philosophy and reasoning behind various design decisions. We’ll also present the status of transports, such as USB and Ethernet. If you’re a developer who is interested in the practical implementations of MIDI 2.0, this is the session for you.

Speakers
Pete Brown

Principal Software Engineer, Windows, Microsoft
Pete works in the Windows + Devices org in Microsoft, primarily focusing on partners, apps, and technology for musicians. He's the lead for the Windows MIDI Services project which is bringing an updated MIDI stack to Windows, and adding full MIDI 2.0 support. He also serves as the... Read More →
Phil Burk

Staff Software Engineer, Google Inc
Music and audio software developer. Interested in compositional tools and techniques, synthesis, and real-time performance on Android. Worked on HMSL, JForth, 3DO, PortAudio, JSyn, WebDrum, ListenUp, Sony PS3, Syntona, ME3000, Android MIDI, AAudio, Oboe and MIDI 2.0.
Torrey Holbrook Walker

Audio/MIDI Framework Engineer, Apple Inc.
I am a senior software framework engineer on the Core Audio team at Apple and a frequent MIDI specification contributor and prototyper with the MIDI Association. I have been passionate about creating music production technologies that delight audio software developers, musicians... Read More →
Mike Kent

Chief Strategy Officer, AmeNote Inc.
Mike Kent is the Co-Founder and Chief Strategy Officer of AmeNote Inc. Mike is a world leader in technology for musical instruments and professional audio/video. Mike is the Chair of the MIDI 2.0 Working Group of the MIDI Association. He is a co-author of USB MIDI 1.0, the principal... Read More →


Wednesday November 16, 2022 9:00am - 9:50am GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

10:00am GMT

Jumpstart Guide To Deep Learning In Audio For Absolute Beginners: From No Experience And No Datasets To A Deployed Model
Deep learning is becoming more and more significant in all areas of science including audio processing. Yet a lot of people have a hard time understanding it or are too scared to start learning it altogether. Is deep learning really so difficult that you need a PhD to use it? Does it truly require huge datasets and gigantic computational clusters? Is it possible to deploy neural networks in real-time audio plugins? In this talk, I will show you how you can learn deep learning for Virtual Analog modeling of audio effects fast, for free, without a PhD, without any special equipment or loads of data, and deploy your deep learning model in an audio plugin.

What you will learn:
  • what are the 4 biggest myths concerning deep learning?
  • how to learn deep learning for audio fast in 4 simple steps for free
  • where to find and how to synthesize a dataset to model your analog device of choice
  • how to train your first deep learning model for audio using the basics of PyTorch and without a computational cluster
  • how to deploy your model in a real-time audio plugin

The presentation will feature a live demo of setting up a deep learning pipeline and training a neural network for Virtual Analog modeling of a distortion effect.
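
As one concrete piece of such a pipeline, the error-to-signal ratio (ESR), a loss commonly used in the virtual-analog modelling literature, can be sketched as follows. This is an illustrative example, not the speaker's code, and it assumes the target signal has nonzero energy.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Error-to-signal ratio: sum((y - yhat)^2) / sum(y^2). Zero means a
// perfect match. In the literature it is often computed after a high-pass
// pre-emphasis filter, which is omitted here for brevity.
double errorToSignalRatio(const std::vector<double>& target,
                          const std::vector<double>& prediction)
{
    assert(target.size() == prediction.size());
    double errorEnergy = 0.0;
    double signalEnergy = 0.0;
    for (std::size_t i = 0; i < target.size(); ++i)
    {
        const double e = target[i] - prediction[i];
        errorEnergy  += e * e;
        signalEnergy += target[i] * target[i];
    }
    return errorEnergy / signalEnergy;
}
```

The same quantity serves both as a training loss and as an evaluation metric when comparing a trained model against the recorded device output.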

Speakers
Jan Wilczek

Lead Educator & Software Consultant, WolfSound
Jan Wilczek graduated with honors from Friedrich-Alexander-Universität Erlangen-Nürnberg, having completed a master’s program in Advanced Signal Processing and Communications Engineering. He is an Audio Developer working on Music Maker JAM at Loudly GmbH in Berlin, an app to make loop-based... Read More →


Wednesday November 16, 2022 10:00am - 10:50am GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

10:00am GMT

WebAudio Modules 2.0: audio plugins for the Web Platform!
Web Audio Modules 2.0 (WAM) is the latest version of an open source audio plugin standard for the web platform, developed since 2015 by a group of academic researchers and developers from the computer music industry. Version 2.0 enables the development of audio effects, instruments, and MIDI controllers as plugins and compatible hosts, and takes into account recent evolution in the development of web technologies. Indeed, since 2018 W3C Web standards have matured: the appearance of WebAssembly, stabilization of WebComponents, support for AudioWorklets [1] in the Web Audio API, and continued evolution of JavaScript have all helped make professional-grade, Web-based audio production a reality. In addition, commercial companies now offer digital audio workstations (DAWs) on the Web which act as host Web applications and support plugins [2] (including WAM ones). Taking into account these developments and the feedback received from developers over the past few years, we released “Web Audio Modules 2.0” (WAM2), an open source SDK and API distributed as four GitHub repositories (https://github.com/webaudiomodules) and as npm modules (MIT License, see https://www.npmjs.com/search?q=keywords:webaudiomodules). WAM2 now supports parameter automation, plugin groups, audio thread isolation, MIDI events, and extended plugin/host communication. WAM2 is Web-aware: plugins can be loaded and instantiated by hosts from a simple URI using dynamic imports.

One of the repositories, wam-examples, comes with more than 20 example plugins written in different languages and build chains. It can be tried online here: https://mainline.i3s.unice.fr/wam2/packages/_/
Web Audio Modules 2.0 also comes with more extensive examples such as a guitar effect pedalboard plugin (https://wam-bank.herokuapp.com/), an open source DAW prototype (https://wam-openstudio.vidalmazuy.fr/), and a collaborative sequencer (a sort of Ableton Live meets Google Docs) entirely developed with WAM2 (https://sequencer.party/), which comes with more than 20 open source WAM2 plugins.

Furthermore, the FAUST online IDE (https://faustide.grame.fr/) can now compile FAUST code into a WAM2 plugin, including GUIs and online publication for reuse in any compatible host (tutorial here: https://docs.google.com/document/d/1HDEm4m_cD47YBuDilzGYiANYQDktj56Njyv0umGYO6o/edit?usp=sharing)

In this talk, we present the WAM2 proposal, illustrated by many interactive demonstrations of plugins and hosts, including open source and commercial ones.

[1] H. Choi. Audioworklet: the Future of Web Audio. International Computer Music Conference ICMC 2018.
[2] M. Buffa, J. Lebrun, S. Ren, S. Letz, Y. Orlarey, and al.. Emerging W3C APIs opened up commercial opportunities for computer music applications. The Web Conference 2020 - DevTrack, Apr 2020, Taipei.

Speakers
Michel Buffa

Professor, Université Côte d'Azur
Michel Buffa is a professor/researcher at University Côte d'Azur, a member of the WIMMICS research group, common to INRIA and to the I3S Laboratory (CNRS). He contributed to the development of the WebAudio research field, since he participated in all WebAudio Conferences, being part... Read More →


Wednesday November 16, 2022 10:00am - 10:50am GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

10:00am GMT

Parameter Inference of Music Synthesizers with Deep Learning
Synthesizers are crucial for designing sounds in today's music. However, to create the desired sound texture by tuning the right synthesizer parameters, one needs years of training and in-depth domain experience in sound design. Music producers might also search through preset banks, but it takes extensive time and effort to find the best preset that gives the desired texture.

Imagine a program that you can drop your desired audio sample, and it automatically generates the synthesizer preset that could recreate the sound. This task is commonly known as "parameter inference" of music synthesizers, which could be a useful tool for sound design. In this talk, we will discuss how deep learning techniques can be used towards solving this task. We will cover recent works that use deep learning to perform parameter inference on a variety of synthesizers (FM, wavetable, etc.), as well as the challenges that were faced in solving this task.

Speakers
Hao Hao Tan

Software Engineer, BandLab


Wednesday November 16, 2022 10:00am - 10:50am GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Fast, High-quality Pseudo-random Numbers for Audio Developers
Many C++ developers reach for std::rand() the first time they need a pseudo-random number. Later they may learn of its downsides and include <random> to use the Mersenne Twister (std::mt19937) and various distributions from the standard library. For some the journey ends here, but for others questions arise: How should I properly seed my generators? How should I approach portability? Does std::mt19937 failing some statistical tests matter to me? Am I leaving performance on the table using std::mt19937? What quality do I need for my use-case and how can I get the best deterministic performance for that quality?

After a brief introduction to generating pseudo-random numbers with the C++ standard library this talk will look at answering these questions in digital audio applications; the same learnings could be applied elsewhere such as games, graphics, or some simulations. We will examine some benchmarks and quality analysis of standard library pseudo-random number generators and modern generators outside the standard. We will close with a demonstration of ways to make runtime-performance determinism improvements with minor quality loss over using standard library distributions.
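
As a taste of the seeding questions the talk raises, here is a minimal sketch of the two standard-library approaches. The helper names are made up for this example; note that even `std::seed_seq` fills the engine's entire state from only the eight words of entropy supplied here.

```cpp
#include <cstdint>
#include <random>

// Non-deterministic seeding: a std::seed_seq built from several
// std::random_device draws initialises the whole mt19937 state, though
// the entropy is limited to the eight input words.
std::mt19937 makeSeededEngine()
{
    std::random_device rd;
    std::seed_seq seq { rd(), rd(), rd(), rd(), rd(), rd(), rd(), rd() };
    return std::mt19937(seq);
}

// Deterministic seeding, e.g. for reproducible tests: the same seed
// always yields the same stream. 5489 is mt19937's default seed.
std::uint32_t firstDrawWithSeed(std::uint32_t seed)
{
    std::mt19937 engine(seed);
    return engine();
}
```

Be aware that `std::random_device` is permitted to be deterministic on some platforms, which is itself one of the portability questions the talk addresses.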

Speakers
Roth Michaels

Principal Software Engineer, Native Instruments
Roth Michaels is a Principal Software Engineer at Native Instruments, an industry leader in real-time audio software for music production and broadcast/film post-production. In his current role he is involved with software architecture and bringing together three merged engineering... Read More →


Wednesday November 16, 2022 11:20am - 12:10pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

11:20am GMT

PANEL: Accessibility: Will You Be Next?
This session, hosted by music producer, audio engineer and accessibility consultant Jason
Dasent, brings together key players from across the music industry, all with a
passion for accessibility. The event will take the form of a discussion panel
focusing on the current state of accessibility as it relates to music technology, as well as how
we can work together to take it to the next level.

Topics that will be covered include: The advancements made by several music equipment
manufacturers in the last 2 years; how to inspire other music equipment manufacturers to
make their products and services accessible; marketing opportunities for companies that
make accessible products, and how we bridge the gap between able-bodied and
differently abled music industry practitioners, leading to more collaborations and
employment opportunities for professional differently abled practitioners.
The event will culminate in a 20-minute performance, showing the latest in accessible music
tech from keyboards to groove stations, to a fully accessible mixing system.

Throughout the conference, attendees are encouraged to visit the “Will You Be Next?”
Accessibility Zone, where they can meet software engineers and managers from a variety of
companies who are all already involved in accessible music tech. Visitors to the Accessibility Zone will be
invited to get hands-on with all the accessible equipment that will be on display. Attendees
will also be able to experience music production from recording to mastering, all with accessible equipment.

Speakers
Jason Dasent

Owner & CEO, Studio Jay Recording
Jason Dasent has over 25 years’ experience in all aspects of recording and music production. Jason launched Studio Jay Recording in Trinidad in 2000 catering to both the Advertising Sector and Artist Production for many top Caribbean recording artists. He has done Music Scores... Read More →
Quintin Balsdon

Software Engineer - Accessibility, Spotify
Quintin has been an Android developer since 2011 and works for Spotify in the accessibility team. He is currently focusing on projects to make accessibility part of the standard development process by creating custom developer tools and apps, like TalkBack for developers, which allows... Read More →
Mary-Alice Stack

Chief Executive, Creative United
James Cunningham

Queen's University Belfast
Grace Capaldi

Director, Grinning Dog Records/Echotown Studio
Grace is new to the industry and she is enjoying the ride. She co-runs a record label and accessible recording studio in Dorset with her husband and sister. The studio was a complete new build and Grace co-project managed every step of the process, being a permanent wheelchair user... Read More →
Harry Morley

Software Developer, Focusrite
Harry has been a software developer at Focusrite for 4 years. He mainly works on C++ software that interacts with audio hardware, such as the Vocaster and Scarlett interfaces. Harry loves talking all things music, creativity and accessibility. Before Focusrite, Harry studied MA Computational... Read More →


Wednesday November 16, 2022 11:20am - 12:10pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Pipeline for VA Modelling with Physics-Informed Machine Learning
Wave Digital Filters and neural networks are two popular solutions for circuit modelling. In this presentation, we demonstrate a pipeline that makes use of our Differentiable Wave Digital Filters library. A dataset was collected from a diode clipper circuit and, with the library, was used to train a real-time deployable model. The trained model has higher accuracy and similar computation time when compared to traditional white-box models. We present this methodology to demonstrate the objective qualities and advantages of this approach.

Speakers
Christopher Clarke

PhD Candidate, Christopher Clarke
Christopher Clarke is a PhD candidate studying at SUTD (AI/Machine Learning).


Wednesday November 16, 2022 11:20am - 12:10pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

11:20am GMT

Point to Point Modeling: An Automatic Circuit Solving Library
The "Point to Point Modeling" library is a software tool for simulating analog circuits. It is intended for audio signal processing applications, such as real-time plugins and mobile apps. In the library, component-level circuit analysis is automated, allowing for arbitrary circuits to be easily implemented based on resistors, capacitors, potentiometers, op-amps, diodes, transistors, and tubes. In addition to solving programmer-specified circuits, the library also includes 150+ pre-made circuits common in audio EQs, consoles, effect pedals, guitar amps, and more. Implementations of the library are available in C++ for commercial software as well as in MATLAB for rapid prototyping. An overview of the library's API for developers will be provided, along with some examples.
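
The component-level analysis such a library automates can be illustrated with the simplest possible case: a first-order RC low-pass derived by hand via the bilinear transform. This is a generic sketch under stated assumptions, not the Point to Point Modeling API.

```cpp
// First-order RC low-pass discretised from component values with the
// bilinear transform. Continuous prototype: H(s) = 1 / (1 + s*R*C),
// cutoff fc = 1 / (2*pi*R*C). Doing this by hand for one fixed topology
// is exactly the kind of work a circuit-solving library automates for
// arbitrary circuits.
struct RcLowpass
{
    double b0 = 0.0, b1 = 0.0, a1 = 0.0;  // y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]
    double x1 = 0.0, y1 = 0.0;            // one sample of filter state

    RcLowpass(double R, double C, double sampleRate)
    {
        const double c = 2.0 * sampleRate * R * C;  // s -> 2*fs*(1-z^-1)/(1+z^-1)
        b0 = 1.0 / (1.0 + c);
        b1 = b0;
        a1 = (1.0 - c) / (1.0 + c);
    }

    double process(double x)
    {
        const double y = b0 * x + b1 * x1 - a1 * y1;
        x1 = x;
        y1 = y;
        return y;
    }
};
```

With R = 10 kΩ and C = 10 nF this gives a cutoff near 1.6 kHz; the appeal of an automated solver is that changing the topology does not require redoing this derivation.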

Code repositories discussed in presentation:
https://github.com/HackAudio/PointToPoint_LT
https://github.com/HackAudio/PointToPoint_MATLAB1

Speakers
avatar for Eric Tarr

Eric Tarr

Engineer, Hack Audio
Dr. Eric Tarr teaches classes on digital audio, computer programming, signal processing and analysis at Belmont University. He received a Ph.D., M.S., and B.S. in Electrical and Computer Engineering from the Ohio State University. He received a B.A in Mathematics and a minor in Music... Read More →


Wednesday November 16, 2022 11:20am - 12:10pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

Building Tomorrow’s Audio Workflows with Pro Tools SDKs
Over the years, many software developers have figured out ingenious ways to integrate with the Pro Tools environment to deliver some of today’s most compelling workflow-enhancing tools, relied on by audio professionals the world over. But these necessary workflows far too often came at the cost of performance and reliability. Recognizing the huge potential for innovation driven by third-party audio developers, the Pro Tools team embarked on a journey to create and deliver a wide spectrum of APIs to facilitate the creation of tomorrow’s audio and music workflows. In this session, we will give ADC 2022 attendees an early look at upcoming Pro Tools APIs, allowing for scripting and automation of the Pro Tools application, as well as present concepts of additional SDKs in the works (Satellite SDK, Clip FX SDK, Session SDK and more). The Avid audio team is committed to a new era of openness, and we hope this session will drive much needed conversation around the needs of the community for deeper integration with Pro Tools.

Speakers
Francois Quereuil

Director, Audio Product Management, Avid


Wednesday November 16, 2022 12:20pm - 12:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

12:20pm GMT

Interactive Ear Training using Web Audio
This talk aims at showing how we can use web technologies like the Web Audio API and Canvas API to bring gamification in the field of web-based audio tutorials. By using the Web Audio API native audio nodes, we will show how we can build a simple, yet fully interactive ear training application for educational purposes.

Speakers
Fabrice Dos Santos

Web Engineering Manager, Slate Digital
Fabrice is a Web Engineering Manager at Slate Digital. His team of Web Developers aims at providing online services in addition to Slate Digital plugins and they are deeply interested in Web Audio as a complementary field of expertise for music software development. Fabrice has previously... Read More →
Charlène Queffelec

Senior UX Designer, Slate Digital
Charlene is a senior UX designer with extensive experience in education and gaming app design projects. She works on plugin design and user workflow with both product and web engineering teams at Slate Digital. She enjoys using gamification strategies to facilitate usage in even the... Read More →



Wednesday November 16, 2022 12:20pm - 12:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

2:00pm GMT

Unit testing the audio processors
The benefits of testing software are well-known. It uncovers bugs, enhances the development process, and helps write high-quality code. However, getting started with testing frameworks and unit testing can be complicated. The goal of this talk is to cover setting up a unit test framework for a JUCE plugin and getting started with the framework. During the talk, some practical unit tests will be presented. These example test cases range from simple sanity checks to verifying the audio processor output.
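
In the spirit of the talk's "simple sanity checks", here is a framework-free sketch. `GainProcessor` is a made-up stand-in for a real plugin processor, and bare asserts stand in for a proper framework; in a JUCE project these would typically live in a test target using something like Catch2 or juce::UnitTest.

```cpp
#include <cassert>
#include <vector>

// A trivial processor and two sanity-check style unit tests.
struct GainProcessor
{
    float gain = 1.0f;

    void process(std::vector<float>& block)
    {
        for (auto& sample : block)
            sample *= gain;
    }
};

void testUnityGainIsTransparent()
{
    GainProcessor p;  // gain defaults to 1.0, so output must equal input
    std::vector<float> block { 0.5f, -0.25f, 1.0f };
    p.process(block);
    assert(block[0] == 0.5f && block[1] == -0.25f && block[2] == 1.0f);
}

void testSilenceInSilenceOut()
{
    GainProcessor p;
    p.gain = 0.7f;
    std::vector<float> block(64, 0.0f);  // a block of digital silence
    p.process(block);
    for (float sample : block)
        assert(sample == 0.0f);
}
```

Checks like these catch a surprising number of bugs (wrong channel counts, uninitialised state, denormal leakage) before any output-verification tests are written.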

Speakers
Esa Jääskelä

Software Developer, Buutti Oy
I'm an embedded software developer by profession, a musician by passion, and a hobbyist audio programmer because I'm hoping to find a way to combine these two things somehow. Lately, I've been focusing on Linux, C++ and testing.


Wednesday November 16, 2022 2:00pm - 2:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

CHOC: Why can't I stop writing libraries or using backronyms?
You'd think that after spending most of my career from 2004 to 2020 writing C++ library code, I'd have managed to move on from doing libraries. Clearly not. This talk presents "CHOC" which is yet another C++ library that's hopefully going to be useful for a lot of audio developers.

CHOC is a free, header-only collection of C++ bits-and-bobs that began as a few classes that I stuck into a repo for personal use, but which has fattened-up into something that now seems worth presenting to the ADC crowd.

I'll try to make this talk interesting by showing how some of the seemingly obvious classes in CHOC actually represent years of hindsight and regret. Quite a few things in the library are my attempt to finally nail the design of functionality that I've implemented differently in the past, so I'll use this to illustrate the factors, thought processes and coding practices that matter when writing good generic code.

If you want to know more about what's in CHOC, the github is here: https://github.com/Tracktion/choc

Speakers
Julian Storer

CEO, SoundStacks
I'm the creator of Tracktion, JUCE and Cmajor


Wednesday November 16, 2022 3:00pm - 3:50pm GMT
3) CMD 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

LEGOfying audio DSP engines
Audio DSP is at the heart of music software and digital hardware, yet it is often overlooked in various ways by product makers. Developing high-quality DSP algorithms is indeed error-prone and time-consuming, and it requires deep expertise in computer programming as well as extensive knowledge in a large number of other subjects. This easily translates into substantial costs when hiring capable DSP engineers or somehow acquiring the required skills. Furthermore, it can be difficult to objectively communicate desired sound characteristics and quantify sound quality. No wonder digital products had a bad reputation for a long time.

While there are some peculiarities that make music DSP development unfortunately more complicated than other engineering tasks, we can still apply lessons learnt in other fields by tastefully adapting them to our case. I'll describe two potential complementary approaches: on one hand I'll draw a parallel between software libraries, object-oriented programming abstractions, and patching systems (such as Pure Data and Max/MSP); on the other, I'll discuss DSP programming languages in their compositional aspects. Code reuse, and thus modularization, will be the common theme, and it will be tackled also from a "cultural" point of view.

Hopefully this talk will help:
  • startup founders to use their budgets more wisely, avoiding shortcuts that lead nowhere or spending too much on custom DSP development;
  • companies with little internal DSP knowledge or few resources to catch sudden market opportunities;
  • established companies to rationalize effort and commit resources where it really makes a difference.

Speakers
Stefano D'Angelo

CEO, Orastron Srl
I am a music DSP researcher and engineer, as well as the founder and CEO of Orastron. I help companies around the world, such as Arturia, Neural DSP, Darkglass Electronics, and Elk, in creating technically-demanding digital synthesizers and effects. I also strive to push audio technology... Read More →


Wednesday November 16, 2022 3:00pm - 3:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

3:00pm GMT

PANEL: Starting your first audio software company
What’s the best direction for you? Building products for clients or for end users? Or both?
- Key legal issues to consider as you start out – equity, IP, trademarks, licenses – and protecting your investment (IP and anti-piracy)
- Where do you get the initial money from?
- What should you build for your first product?
- Open-source alternative models
- Building your brand and your client list
- Learning from your mistakes and overcoming obstacles
- Optimising your marketing, and leveraging the different platforms that are available
- Building your team and identifying skills gaps
- How to set your rates – making sure you offer value whilst still being able to make a living!
- How to win investment and funding as you start to scale up
- Maintaining a healthy work/life balance

Speakers
Rebekah Wilson

CEO, Source Elements
Rebekah is the technical co-founder and CEO who co-created the entire suite of Source Elements software. With a degree in music composition and a lifetime of love for technology, Rebekah has focused her career as a composer, electronic music researcher, and software developer - with... Read More →
Heather Rafter

Co-founder & Principal, RafterMarsh
Heather Dembert Rafter has been providing legal and business development services to the audio, music technology, and digital media industries for over twenty-five years. As principal counsel at RafterMarsh US, she leads the RM team in providing sophisticated corporate and IP advice... Read More →
Adam Wilson

Software Developer, Code Garden
Adam is the founder and chief of Node Audio, where he supports clients and developers creating awesome audio software. Between managing teams, coding and growing his business, he also makes music, and is an advocate of microtonality. In 2022 he launched a plugin called Entonal Studio... Read More →
Christian Luther

Founder, Playfair Audio
Christian is an audio DSP expert based in Hannover, Germany. In the past, he has worked in R&D with brands such as Access, Kemper Amps and Sennheiser. In 2022, Christian founded his one-person audio plugin company Playfair Audio.
Marius Metzger

Entrepeneur, CrispyTuner
My name is Marius, I'm 24 years old and have a passion for product design, leadership, and, of course, software development. After finishing school at 16 years of age, I got right into freelance software development, with Google as one of my first clients. In 2020, I released a pitch... Read More →


Wednesday November 16, 2022 3:00pm - 3:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

Allocations considered beneficial: How to allocate on the audio thread, and why you might want to
Audio programmers have traditionally been warned away from doing memory allocations on the audio thread, for very good reasons. But that has forced companies to limit feature sets and write unnatural code.

What if we could allocate freely in our audio code, and even reap some performance benefits from doing so? How would that enable us to improve our products and our code?

In this talk we're going to take a whistlestop tour of C++'s std::pmr namespace, and discuss how to use it in the low-latency environment of audio processing.
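
As a small preview of what std::pmr offers in this setting, here is a minimal sketch. It is illustrative only, not a production real-time allocator, and `sumWithArenaAllocation` is a made-up example function.

```cpp
#include <array>
#include <cstddef>
#include <memory_resource>
#include <vector>

// A pre-reserved buffer is handed to a std::pmr::monotonic_buffer_resource,
// so the pmr container below "allocates" by bumping a pointer inside that
// buffer: no locks, no syscalls, deterministic cost. Passing
// null_memory_resource() as the upstream means we get an exception rather
// than silently falling back to the global heap if the buffer runs out.
std::size_t sumWithArenaAllocation()
{
    static std::array<std::byte, 4096> arena;

    std::pmr::monotonic_buffer_resource resource(
        arena.data(), arena.size(), std::pmr::null_memory_resource());

    std::pmr::vector<int> values(&resource);
    for (int i = 1; i <= 10; ++i)
        values.push_back(i);   // each reallocation is served from 'arena'

    std::size_t total = 0;
    for (int v : values)
        total += static_cast<std::size_t>(v);
    return total;
}
```

One caveat worth knowing: a monotonic resource never reclaims memory until it is destroyed or released, so in an audio callback you would typically reset it between blocks or layer a pool resource on top.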

Speakers
Tom Maisey

Software Engineer, Cradle
Tom Maisey is a programmer who has worked in audio for more than ten years, including at ROLI and Cradle. He spends most of his time thinking about how to make audio development more interactive and fun.


Wednesday November 16, 2022 4:20pm - 4:50pm GMT
2) AltTab 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

APPLE: Deploying v2 Audio Units as v3 Audio Unit Extensions
In this talk, we will demonstrate how an existing v2 Audio Unit can be deployed as a v3 Audio Unit Extension for added capabilities like iOS support and Objective-C / Swift compatibility. This can be done without modifying existing code by means of the AUAudioUnitV2Bridge wrapper class. First, we will show how to build a non-UI v3 Audio Unit Extension using the recently released AUGenericViewController for quick evaluation and testing. Then we will show how to reuse the Audio Unit's Cocoa UI on macOS, and finally how to add a new cross-platform AppKit / UIKit-based UI written in Swift.

Speakers
Marc Boucek

Core Audio Software Engineer, Apple


Wednesday November 16, 2022 4:20pm - 4:50pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK

4:20pm GMT

Building the Accelerated Audio Computing Industry of Tomorrow: GPU Audio
What do the GPU industry and pro audio have in common? For years, the answer was “not much” to just about everyone. But the dreamers kept dreaming, and quietly, a solution was built over many years. In 2020, during the height of the pandemic, an entirely remote team of professional musicians, engineers, computer scientists and GPU architects came out of the shadows to begin an open dialogue with both industries about how it was time for audio processing to get a serious upgrade. In this brief promotional talk, the co-founders of GPU Audio (Alexander Talashov and Jonathan Rowden) will share an overview of their vision of building an accelerated audio computing industry niche powered by GPUs, and invite the audience to be part of that journey.
What's been done as of today?
- GPU Audio Tech Stack
- First products: Early access and beta 
- First reveals: New products and features

What will we do in the midterm?
- SDK release (premiering a portion here at ADC Hands-On)
- Spatial Audio (Mach1 technologies collaboration)
- More integrations 

How do we define the future of this technology?

What is required to bring real-time accelerated audio computing to ML-powered algorithms on the GPU?
- Machine Learning Frontend, Backend Implementation and API
- Developer Community to Launch by 2023

Growth and Opportunity
- Partnership Opportunities and Hiring
- Vertical integration: how GPU Audio is impacting audio broadly


Speakers

Alexander Talashov

Co-founder and Managing Partner, GPU Audio

Jonathan Rowden

Co-founder and CBO, GPU Audio Inc
Hello ADC community, my name is Jonathan Rowden and I am the CBO and co-founder of GPU AUDIO, a new core-technology company focused on unlocking GPU-based parallel processing for greatly accelerated real-time and offline DSP tasks. Our mission is to provide a new backbone of processing... Read More →


Wednesday November 16, 2022 4:20pm - 4:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK

5:00pm GMT

KEYNOTE: Incompleteness Is a Feature Not a Bug
If you’ve been in the music technology field for any length of time, you may have encountered the visual programming environment called Max. Maybe you’ve wondered what kind of people work on a computer program that doesn’t really seem to do anything?

In this talk David will share the unlikely story of Max’s transformative impact on both people and organizations, starting with his own life and that of his company Cycling ‘74. 25 years ago he was a reluctant entrepreneur with no real goals other than continuing to work on some cool software. Over time, he became more interested in exploring new ways of working, and realized that Max itself was an inspiration for the culture he was seeking as a software developer. Max's design and philosophy have allowed the company to work as a fully remote team since the beginning, with little need for planning and hierarchy. David will identify some properties common to both software and organizational architecture, many learned through trial and error, that seem to sustain the creative flourishing of both people and teams. Some of these include learning to solve less than 100% of the problem, parameterizing interdependence and personal development, and building trust through innovation instead of rules. Finally, David will discuss some limitations of the Max approach and show how they’ve tried to address them in their most recent work related to code generation and export.




Speakers

David Zicarelli

CEO, Cycling ‘74
David Zicarelli is a computer programmer and improvising musician who designs interactive software to support creative expression. He has been working on Max and Max-related projects since the late 1980s. Prior to his Max life he created one of the first graphical voice editors for... Read More →


Wednesday November 16, 2022 5:00pm - 6:00pm GMT
1) Ctrl 10 South Pl, London EC2M 7EB, UK
 