will fork the software as a basis for their research, and the documentation explains several custom options for extending the code (shown here).
References
Amodei, D., Anubhai, R., Battenberg, E., Case, C., Casper, J., Catanzaro, B., Chen, J., et
al. (2016). Deep speech 2: End-to-end speech recognition in English and Mandarin. In ICML
(pp. 173–182).
Araki, S., Nesta, F., Vincent, E., Koldovsky, Z., Nolte, G., Ziehe, A., & Benichoux, A. (2012).
The 2011 signal separation evaluation campaign (SiSEC2011): Audio source separation. In
10th international conference on latent variable analysis and signal separation.
doi:10.1007/978-3-642-28551-6_51
Cano, E., FitzGerald, D., Liutkus, A., Plumbley, M. D., & Stöter, F. (2019). Musical source
separation: An introduction. IEEE Signal Processing Magazine, 36(1), 31–40.
doi:10.1109/MSP.2018.2874719
Chandna, P., Miron, M., Janer, J., & Gómez, E. (2017). Monoaural audio source separation
using deep convolutional neural networks. In Latent variable analysis and signal separation
(pp. 258–266). doi:10.1007/978-3-319-53547-0_25
Liutkus, A., & Stöter, F.-R. (2019, September). sigsep/norbert: v0.2.1.
doi:10.5281/zenodo.3386463
Liutkus, A., Stöter, F.-R., Rafii, Z., Kitamura, D., Rivet, B., Ito, N., Ono, N., et al. (2017).
The 2016 signal separation evaluation campaign. In Proc. Intl. Conference on latent variable
analysis and signal separation (LVA/ICA) (pp. 323–332). Springer International Publishing.
doi:10.1007/978-3-319-53547-0_31
Manilow, E., Seetharaman, P., & Pardo, B. (2018). The northwestern university source
separation library. In ISMIR (pp. 297–305).
McFee, B., Kim, J. W., Cartwright, M., Salamon, J., Bittner, R. M., & Bello, J. P. (2018).
Open-source practices for music signal processing research: Recommendations for transparent,
sustainable, and reproducible audio research. IEEE Signal Processing Magazine, 36(1),
128–137. doi:10.1109/MSP.2018.2875349
Ono, N., Koldovsky, Z., Miyabe, S., & Ito, N. (2013). The 2013 signal separation evaluation
campaign. In Proc. IEEE international workshop on machine learning for signal processing
(MLSP) (pp. 1–6). doi:10.1109/MLSP.2013.6661988
Ono, N., Rafii, Z., Kitamura, D., Ito, N., & Liutkus, A. (2015). The 2015 signal separation
evaluation campaign. In Proc. Intl. Conference on latent variable analysis and signal
separation (LVA/ICA). Liberec, Czech Republic. doi:10.1007/978-3-319-22482-4_45
Ozerov, A., Vincent, E., & Bimbot, F. (2011). A general flexible framework for the handling
of prior information in audio source separation. IEEE Transactions on Audio, Speech, and
Language Processing, 20(4), 1118–1133. doi:10.1109/TASL.2011.2172425
Rafii, Z., Liutkus, A., Stöter, F.-R., Mimilakis, S. I., & Bittner, R. (2017, December).
MUSDB18, a corpus for audio source separation. doi:10.5281/zenodo.1117372
Rafii, Z., Liutkus, A., Stöter, F.-R., Mimilakis, S. I., & Bittner, R. (2019, August).
MUSDB18-HQ, an uncompressed version of MUSDB18. doi:10.5281/zenodo.3338373
Roma, G., Grais, E. M., Simpson, A., Sobieraj, I., & Plumbley, M. D. (2016). Untwist: A new
toolbox for audio source separation. In Extended abstracts for the late-breaking demo session
of the 17th international society for music information retrieval conference (ISMIR) (pp. 7–11).
Stöter et al. (2019). Open-Unmix - A Reference Implementation for Music Source Separation.
Journal of Open Source Software, 4(41), 1667. https://doi.org/10.21105/joss.01667