Tuesday, November 15 • 4:20pm - 4:50pm
Real-time audio source separation on iOS devices


Source separation has seen tremendous progress in recent years thanks to advances in deep learning. Running it in real time would give musicians new ways to interact with music, e.g. in DJing and music learning. In this talk, we will cover the challenges we had to tackle to provide our users with real-time, low-latency source separation on iOS mobile devices.
We will talk about:
- state-of-the-art deep learning-based source separation algorithms
- model optimisation for iOS mobile platforms (Core ML and ANE; a brief sketch follows below)
- low-level integration for real-time use-cases
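
For readers unfamiliar with the Core ML/ANE part of that list, the following is a minimal, hypothetical sketch (not code from the talk) of how an iOS app typically asks Core ML to schedule a model on the Apple Neural Engine. The `SourceSeparator` class name is a placeholder for the model class Xcode generates from a compiled model file:

```swift
import CoreML

// Minimal sketch, assuming a hypothetical "SourceSeparator" model class
// generated by Xcode from a .mlmodel / .mlpackage added to the project.
let configuration = MLModelConfiguration()
configuration.computeUnits = .all   // allow CPU, GPU, and ANE dispatch

do {
    let separator = try SourceSeparator(configuration: configuration)
    // prediction calls on `separator` can then run on the ANE when supported
} catch {
    print("Failed to load model: \(error)")
}
```
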


IF YOU ARE ATTENDING ONLINE, ALL TALK SESSIONS CAN BE ACCESSED FROM THE MAIN LOBBY: https://conference.audio.dev

Speakers

Adrien Ferrigno

Audio Developer, MWM
Audio developer, computer scientist, researcher, iOS developer and musician with a crush on mix/mastering. Adrien joined the MWM adventure in 2015 and has worked on various audio projects since (edjing, Guitar Tuner, Stemz…). He has been in multiple bands, produced his own tracks…

Clément Tabary

ML Engineer, MWM
Clément is a deep-learning research engineer at MWM. He applies ML algorithms to a wide range of multimedia fields, from music information retrieval to image generation. He's currently working on audio source separation, music transcription, and automatic DJing.


Tuesday November 15, 2022 4:20pm - 4:50pm GMT
4) Shift 10 South Pl, London EC2M 7EB, UK