Research
Cyber-improvisations et cocréativité, quand le jazz joue avec les machines
Gérard Assayag, Marc Chemillier, Bernard Lubat, 2023, pp. 38-39. Read full article. Abstract: Can you tell us about how you met and what made you want to work together? Marc Chemillier: Gérard Assayag and I began designing software for musical improvisation in the early 2000s. Gérard was working on stylistic simulation and was exploring […]
Improvisio : towards a visual music improvisation tool for musicians in a cyber-human co-creation context
By Sabina Covarrubias (STMS). Journées d'informatique musicale (Micael Antunes; Jonathan Bell; Javier Elipe Gimeno; Mylène Gioffredo; Charles de Paiva Santana; Vincent Tiffon, eds.), May 2024, Marseille, France. Read full publication. Abstract: Improvisio is a software tool for musicians who want to improvise visual music. Its development is part of the REACH project. It is useful to create visual […]
Being the Artificial Player: Good Practices in Collective Human-Machine Music Improvisation
An article by Marco Fiorini (STMS – IRCAM, Sorbonne Université, CNRS) has been accepted for the 13th EAI International Conference ArtsIT, Interactivity & Game Creation at New York University Abu Dhabi, United Arab Emirates. Read the full paper. Abstract: This essay explores the use of generative AI systems in co-creativity within musical improvisation, offering best practices for […]
Preparing the Boulez Somax2 IRCAM variations for concert at Carnegie Hall with ICE
Levy Lorenzo from the International Contemporary Ensemble is in the studio at IRCAM with Marco Fiorini, Gérard Assayag and George Lewis to work on preliminary Somax2 tests for the upcoming concert homage to Pierre Boulez that will take place at Carnegie Hall in New York on 30 January 2025, with the world premiere of Boulez Somax2 […]
Zero-Shot Audio Source Separation through Query-Based Learning from Weakly-Labeled Data
Ke Chen, Xingjian Du, Bilei Zhu, Zejun Ma, Taylor Berg-Kirkpatrick, Shlomo Dubnov. Proceedings of the AAAI Conference on Artificial Intelligence, 2022 (remote conference), pp. 4441-4449. Read full publication. Abstract: Deep learning techniques for separating audio into different sound sources face several challenges. Standard architectures require training separate models for different types of audio sources. Although […]
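To give a rough idea of the query-based approach named in the title, here is a minimal sketch of one possible setup (our own illustration in PyTorch, not the paper's code; the module names QueryEncoder and ConditionedSeparator and all dimensions are hypothetical): a query encoder embeds a few example clips of the target source, and a mask-based separator is conditioned on that embedding, so a source type unseen during training only needs query examples at inference time.

# Hypothetical sketch of query-based source separation (not the paper's code).
import torch
import torch.nn as nn

class QueryEncoder(nn.Module):
    """Embed a few example clips of the target source into a single vector."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, query_mels):            # (batch, 1, mels, frames)
        return self.net(query_mels)           # (batch, emb_dim)

class ConditionedSeparator(nn.Module):
    """Predict a spectrogram mask for the source described by the query embedding."""
    def __init__(self, n_freq=513, emb_dim=128, hidden=256):
        super().__init__()
        self.film = nn.Linear(emb_dim, 2 * hidden)   # FiLM-style conditioning
        self.enc = nn.Linear(n_freq, hidden)
        self.dec = nn.Linear(hidden, n_freq)

    def forward(self, mix_spec, query_emb):   # mix_spec: (batch, frames, n_freq)
        h = torch.relu(self.enc(mix_spec))
        gamma, beta = self.film(query_emb).chunk(2, dim=-1)
        h = gamma.unsqueeze(1) * h + beta.unsqueeze(1)
        mask = torch.sigmoid(self.dec(h))      # per-bin mask for the target source
        return mask * mix_spec

# Usage with random tensors standing in for real features.
enc, sep = QueryEncoder(), ConditionedSeparator()
query = torch.randn(2, 1, 64, 100)     # few-shot examples of the target source
mix = torch.randn(2, 200, 513)         # magnitude spectrogram of the mixture
separated = sep(mix, enc(query))
print(separated.shape)                 # torch.Size([2, 200, 513])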
Computational Auditory Scene Analysis with Weakly Labelled Data
By Qiuqiang Kong, Ke Chen, Haohe Liu, Xingjian Du, Taylor Berg-Kirkpatrick, Shlomo Dubnov, Mark D. Plumbley. Read full publication. Abstract: Universal source separation (USS) is a fundamental research task for computational auditory scene analysis, which aims to separate mono recordings into individual source tracks. There are three potential challenges awaiting the solution to the audio source […]
A New Dataset for Tag- and Text-based Controllable Symbolic Music Generation
By Weihan Xu, Julian McAuley, Taylor Berg-Kirkpatrick, Shlomo Dubnov, Hao-Wen Dong. ISMIR Late-Breaking Demos, Nov 2024, San Francisco, United States. Read full publication. Abstract: Recent years have seen many audio-domain text-to-music generation models that rely on large amounts of text-audio pairs for training. However, similar attempts at symbolic-domain controllable music generation have been hindered due to […]
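As a purely illustrative sketch of what a tag- and text-conditioned symbolic example could look like (our assumption, not the dataset's actual schema; the field names and token vocabulary below are invented), each entry might pair a caption and tags with an event-token encoding of the score, with the control information prepended to the token sequence before training a sequence model:

# Hypothetical example format for tag- and text-conditioned symbolic generation.
import json

example = {
    "caption": "A gentle piano waltz in A minor",
    "tags": ["piano", "waltz", "minor", "slow"],
    # Event-style symbolic tokens (bar / position / pitch / duration), the kind
    # of encoding commonly used for controllable symbolic generation.
    "tokens": ["bar", "pos_0", "note_A3", "dur_4", "pos_4", "note_C4", "dur_4"],
}

def to_training_sequence(ex):
    """Prepend control tokens derived from the tags to the music tokens."""
    control = [f"<tag:{t}>" for t in ex["tags"]] + ["<sep>"]
    return control + ex["tokens"]

print(json.dumps(example, indent=2))
print(to_training_sequence(example))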
HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection
Ke Chen, Xingjian Du, Bilei Zhu, Zejun Ma, Taylor Berg-Kirkpatrick, Shlomo Dubnov. ICASSP 2022 – 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2022, Singapore, pp. 646-650. Read full publication. Abstract: Audio classification is an important task of mapping audio samples into their corresponding labels. Recently, the transformer model with self-attention […]
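A minimal, hedged sketch of the general idea behind a hierarchical audio transformer follows (not the HTS-AT implementation; TinyHierarchicalAudioTransformer and its dimensions are placeholders): mel-spectrogram patches become tokens, transformer stages process them with a token-merging step in between to build the hierarchy, and pooled token features yield clip-level class logits.

# Toy hierarchical audio transformer for sound classification (illustration only).
import torch
import torch.nn as nn

class TinyHierarchicalAudioTransformer(nn.Module):
    def __init__(self, patch=4, dim=96, n_classes=527):
        super().__init__()
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.stage1 = nn.TransformerEncoder(layer, num_layers=2)
        self.merge = nn.Linear(2 * dim, dim)        # halve the token count
        self.stage2 = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, mel):                          # mel: (batch, 1, mels, frames)
        x = self.patch_embed(mel).flatten(2).transpose(1, 2)   # (batch, tokens, dim)
        x = self.stage1(x)
        if x.size(1) % 2:                            # pad to an even token count
            x = torch.cat([x, x[:, -1:]], dim=1)
        x = self.merge(x.reshape(x.size(0), x.size(1) // 2, -1))
        x = self.stage2(x)
        return self.head(x.mean(dim=1))              # clip-level class logits

model = TinyHierarchicalAudioTransformer()
logits = model(torch.randn(2, 1, 64, 256))           # fake mel-spectrogram batch
print(logits.shape)                                  # torch.Size([2, 527])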
Improving Choral Music Separation through Expressive Synthesized Data from Sampled Instruments
Ke Chen, Hao-Wen Dong, Yi Luo, Julian McAuley, Taylor Berg-Kirkpatrick, Miller Puckette, Shlomo Dubnov. Proceedings of the 23rd International Society for Music Information Retrieval Conference, Dec 2022, Bengaluru, India. Read full publication. Abstract: Choral music separation refers to the task of extracting tracks of voice parts (e.g., soprano, alto, tenor, and bass) from mixed audio. […]
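To illustrate the data-synthesis idea in a hedged way (our own toy example, not the paper's pipeline; a real setup would render MIDI voice parts with sampled instruments or expressive synthesis rather than sine tones), one can generate training pairs by synthesizing each voice part separately and mixing them, keeping the isolated stems as separation targets:

# Toy sketch: build (mixture, stems) training pairs for choral separation.
import numpy as np

SR = 16000
PARTS = {"soprano": 523.25, "alto": 392.00, "tenor": 293.66, "bass": 196.00}  # example pitches (Hz)

def render_part(freq_hz, seconds=2.0):
    """Stand-in for sampled-instrument synthesis of one voice part."""
    t = np.arange(int(SR * seconds)) / SR
    return 0.2 * np.sin(2 * np.pi * freq_hz * t)

stems = {name: render_part(freq) for name, freq in PARTS.items()}
mixture = sum(stems.values())          # the mixed "choir" signal the model hears
# A separation model would then be trained to recover each stem from the mixture.
print(mixture.shape, {k: v.shape for k, v in stems.items()})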