Deep learning is becoming increasingly important across the sciences, audio processing included. Yet many people struggle to understand it or are too intimidated to start learning it at all. Is deep learning really so difficult that you need a PhD to use it? Does it truly require huge datasets and gigantic computational clusters? Can neural networks be deployed in real-time audio plugins? In this talk, I will show you how to learn deep learning for Virtual Analog modeling of audio effects quickly, for free, without a PhD, without any special equipment or piles of data, and how to deploy your deep learning model in an audio plugin.
What you will learn:
- the 4 biggest myths about deep learning
- how to learn deep learning for audio fast in 4 simple steps for free
- where to find and how to synthesize a dataset to model your analog device of choice
- how to train your first deep learning model for audio using the basics of PyTorch and without a computational cluster
- how to deploy your model in a real-time audio plugin
The presentation will feature a live demo of setting up a deep learning pipeline and training a neural network for Virtual Analog modeling of a distortion effect.
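To give a flavor of what such a pipeline looks like, here is a minimal sketch in PyTorch of training a tiny network to mimic a distortion curve. Everything in it is a hypothetical stand-in, not the talk's actual demo: a tanh clipper plays the role of the analog device, and a small feedforward waveshaper stands in for whatever architecture the talk uses.

```python
import torch
from torch import nn

# Synthetic "dataset": clean input samples and a distorted target.
# The tanh clipper below is a stand-in for the analog device being modeled.
torch.manual_seed(0)
x = torch.linspace(-1.0, 1.0, 1024).unsqueeze(-1)  # clean signal values
y = torch.tanh(5.0 * x)                            # "analog" distortion output

# Tiny MLP used as a stateless waveshaper (a sketch; real Virtual Analog
# models of stateful effects often use recurrent networks instead).
model = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

initial = loss_fn(model(x), y).item()
for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
final = loss_fn(model(x), y).item()
print(final < initial)  # the fit improves over training
```

This runs in seconds on a laptop CPU, which is the point: no computational cluster is needed to get started.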
IF YOU ARE ATTENDING ONLINE, ALL TALK SESSIONS CAN BE ACCESSED FROM THE MAIN LOBBY:
https://conference.audio.dev