Shlomo Dubnov, Ke Chen, Kevin Huang. Journal of Creative Music Systems, 2022, 1. Read full publication. Abstract: Generative musical models often comprise multiple levels of structure, presuming that the process of composition moves from background to foreground, or between generating the musical surface and some deeper, reduced representation that governs hidden or latent dimensions […]
Retrieval Guided Music Captioning via Multimodal Prefixes
Nikita Srivatsan, Ke Chen, Shlomo Dubnov, Taylor Berg-Kirkpatrick. Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24), Aug 2024, Jeju, South Korea, pp. 7762-7770. Read full publication. Abstract: In this paper we put forward a new approach to music captioning, the task of automatically generating natural language descriptions for songs. These descriptions are useful both for categorization […]
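As an illustration of the prefix-conditioning idea named in the title, here is a minimal sketch of mapping an audio embedding into "prefix" vectors prepended to a language model's token embeddings. All dimensions, the random projection, and the function names are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical dimensions (not from the paper): audio embedding size,
# language-model embedding size, and number of prefix tokens.
AUDIO_DIM, LM_DIM, N_PREFIX = 512, 768, 8

rng = np.random.default_rng(0)
# In a trained model this projection is learned; here it is random.
W = rng.standard_normal((AUDIO_DIM, N_PREFIX * LM_DIM)) * 0.02

def audio_to_prefix(audio_emb: np.ndarray) -> np.ndarray:
    """Map one audio embedding to a sequence of prefix vectors."""
    return (audio_emb @ W).reshape(N_PREFIX, LM_DIM)

def build_input(prefix: np.ndarray, token_embs: np.ndarray) -> np.ndarray:
    """Prepend the multimodal prefix to the caption token embeddings."""
    return np.concatenate([prefix, token_embs], axis=0)

audio_emb = rng.standard_normal(AUDIO_DIM)
token_embs = rng.standard_normal((12, LM_DIM))  # 12 caption tokens
inputs = build_input(audio_to_prefix(audio_emb), token_embs)
print(inputs.shape)  # (20, 768): 8 prefix vectors + 12 token embeddings
```

The retrieval-guided part of the method would select which audio (or related captions) to encode into such prefixes; that selection step is not sketched here.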
Somax2 Workshop and Concert at Università di Pisa
Marco Fiorini was invited to lead a workshop at Università di Pisa on 4 November 2024, showcasing REACH advances in music improvisation with co-creative agents. The workshop focused on Somax2, the REACH co-creative environment for music improvisation, and saw a large number of students from Università di Pisa interact with it for the first time. A […]
Variation versus bouclage. L’improvisation est-elle soluble dans l’électro ?
By Marc Chemillier. In Franck Jedrzejewski, Carlos Lobo, Antonia Soulez (eds.), Écrire comme composer. Le rôle des diagrammes, Éditions Delatour, pp. 77-90, 2021, Musique/Philosophie, ISBN 9782752104267. Read full publication. Abstract: The visual interfaces of music software are diagrams in a graphical sense, but they are also diagrams in a more conceptual sense, because they determine a certain way of […]
Le langage harmonique d’Hermeto Pascoal et son apprentissage par une intelligence artificielle
By Marc Chemillier, Jean-Pierre Cholleton. Read full publication. Abstract: Hermeto Pascoal's harmonic language and its learning by an artificial intelligence. A conversation with Jovino Santos Neto, who was the pianist in Hermeto Pascoal's group from 1977 to 1992. Since then, he has remained the informal custodian of the artistic heritage of this great Brazilian musician. […]
The Application of Somax2 in the Live-Electronics Design of Roberto Victório’s Chronos IIIc
By William Teixeira, Marco Fiorini, Mikhail Malt, Gérard Assayag. Musica Hodie, 2024, 24. DOI: 10.5216/mh.v24.78611. Read full publication. Abstract: This article details the development of a computational patch designed for real-time processing in the composition of Chronos IIIc, originally written by Brazilian composer Roberto Victório for solo cello. In this version, the piece incorporates […]
George Lewis joins the REACH team at IRCAM
Great encounters in IRCAM's Studio 5 with George Lewis, Gérard Assayag, Marco Fiorini, Damon Holzborn and Hongshuo Fan, talking about music technology, improvisation and composition, and thinking about new interaction strategies between Voyager and Somax2. George Lewis will be part of the REACH team for the upcoming year, and we cannot wait to see what we […]
Deriving Representative Structure Over Music Corpora
By Ilana Shapiro, Ruanqianqian Huang, Zachary Novack, Cheng-I Wang, Hao-Wen Dong, Taylor Berg-Kirkpatrick, Shlomo Dubnov, Sorin Lerner. Read full publication. Abstract: Western music is an innately hierarchical system of interacting levels of structure, from fine-grained melody to high-level form. In order to analyze music compositions holistically and at multiple granularities, we propose a unified, hierarchical […]
ARTE Documentary on the Future of Music
Elaine Chew and Gérard Assayag are part of director Anna Neuhaus' ARTE documentary The Future of Music, focusing on what AI means for music and featuring jazz pianist Michael Wollny and classical pianist Kit Armstrong. This segment was filmed in Paris at IRCAM, the Institut de Recherche et Coordination Acoustique/Musique. Elaine is joined by Gérard Assayag (ERC […]
Simultaneous Music Separation and Generation Using Multi-Track Latent Diffusion Models
Read original: arXiv:2409.12346 – Published 20/09/2024 by Tornike Karchkhadze, Mohammad Rasool Izadi, Shlomo Dubnov. Abstract: Diffusion models have recently shown strong potential in both music generation and music source separation tasks. Although still in its early stages, a trend is emerging toward integrating these tasks into a single framework, as both involve generating musically aligned parts and can […]
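To make the multi-track latent setup concrete, the sketch below applies the standard diffusion forward-noising step to a stack of per-stem latents. The shapes, the linear noise schedule, and treating stems as a leading array axis are assumptions for illustration only, not details taken from the paper:

```python
import numpy as np

# Toy shapes (assumptions): 4 stems, each with a 16x64 latent.
N_TRACKS, LATENT_SHAPE = 4, (16, 64)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # standard linear schedule
alphas_cumprod = np.cumprod(1.0 - betas)  # cumulative signal retention

rng = np.random.default_rng(0)

def q_sample(x0: np.ndarray, t: int) -> np.ndarray:
    """Forward-noise clean multi-track latents x0 to timestep t:
    x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    a_bar = alphas_cumprod[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * eps

x0 = rng.standard_normal((N_TRACKS,) + LATENT_SHAPE)  # one latent per stem
xt = q_sample(x0, t=500)
print(xt.shape)
```

A trained denoiser would then reverse this process jointly over all stems, which is what lets one model serve both generation (denoise from noise) and separation (denoise conditioned on the mixture).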