Audio software running in “headless” or remote contexts, i.e. without access to a tightly integrated GUI, is increasingly common: on embedded devices, on remote cloud servers, or distributed across a local network where remote or automated control is desired.
The parameter controls exposed through plugin APIs are insufficient: the implementations that are practical to use support only a fraction of the variety needed. Developers therefore expose many additional controls through the GUI, which does not translate to headless or remote use.
Plugin GUIs can be as involved as a fully fledged DAW, exposing complex interactions. Although MIDI 2.0 will, in the medium term, replace some of what we discuss, even full adoption would not cover the many interactions possible from a GUI.
In this article we discuss three basic distributed-systems patterns for controlling audio software at run time over a network: simple socket messaging, request/response, and publish/subscribe.
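To make the third pattern concrete, here is a minimal in-process sketch of publish/subscribe: publishers send messages to named topics without knowing who listens, and the broker fans each message out to subscribers. All names (`Broker`, the topic string) are hypothetical illustrations, not taken from the applications described in this article; a networked broker would additionally serialize messages and carry them over sockets.

```python
# Minimal publish/subscribe broker sketch. Names are hypothetical,
# not from the article's example applications.
from collections import defaultdict
from typing import Any, Callable


class Broker:
    """Routes published messages to callbacks subscribed per topic."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # Deliver to every subscriber of this topic; publishers never
        # need to know who (if anyone) is listening.
        for callback in self._subs[topic]:
            callback(message)


broker = Broker()
received: list[float] = []
broker.subscribe("params/filter/cutoff", received.append)
broker.publish("params/filter/cutoff", 440.0)
broker.publish("params/other", 1.0)  # no subscribers; silently dropped
```

The decoupling is the point: a headless audio process can publish parameter changes, and any number of remote UIs or loggers can subscribe without the audio code changing.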
We also demonstrate their implementation using the OSC and gRPC frameworks, discussing challenges and best practices specific to real-time audio.
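As one concrete illustration of simple socket messaging with OSC, the sketch below hand-encodes a minimal OSC 1.0 message (null-padded address pattern, a `,f` type-tag string, and a big-endian float32 argument) and fires it over UDP, the transport OSC is most commonly paired with. The address `/filter/cutoff`, host, and port are made-up placeholders, not taken from the applications this article describes.

```python
import socket
import struct


def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per OSC 1.0."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)


def osc_float_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying a single float32 argument."""
    return (
        osc_pad(address.encode("ascii"))  # address pattern, e.g. "/filter/cutoff"
        + osc_pad(b",f")                  # type-tag string: one float argument
        + struct.pack(">f", value)        # big-endian float32 payload
    )


packet = osc_float_message("/filter/cutoff", 440.0)

# Fire-and-forget datagram to a (placeholder) headless audio process.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))
sock.close()
```

This is the lightest of the three patterns: no connection, no reply, no delivery guarantee, which is often acceptable for continuous parameter streams but not for commands that must not be lost; those are better served by request/response.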
To ground the above, we provide a pair of ready-to-use, fully fledged open-source applications implementing our suggestions, both available for download.
https://conference.audio.dev