{"id":1150,"date":"2023-10-06T17:11:33","date_gmt":"2023-10-06T15:11:33","guid":{"rendered":"https:\/\/reach.ircam.fr\/?p=1150"},"modified":"2024-03-23T11:39:01","modified_gmt":"2024-03-23T10:39:01","slug":"new-generative-ai-transforms-poetry-into-music","status":"publish","type":"post","link":"https:\/\/reach.ircam.fr\/index.php\/2023\/10\/06\/new-generative-ai-transforms-poetry-into-music\/","title":{"rendered":"New, Generative AI Transforms Poetry into Music"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"1150\" class=\"elementor elementor-1150\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-ddd28ff elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"ddd28ff\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-dd604fe\" data-id=\"dd604fe\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-775f9a8 elementor-widget elementor-widget-text-editor\" data-id=\"775f9a8\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p><a href=\"https:\/\/today.ucsd.edu\/story\/new-generative-ai-transforms-poetry-into-music\">Read original article<\/a><\/p><p>Artificial intelligence (AI) shows its artistic side in a new algorithm created by researchers from UC San Diego and the Institute for Research and Coordination in Acoustics\/Music (IRCAM) in Paris, France.<\/p><p>The new algorithm, called the Music Latent Diffusion Model (MusicLDM), helps create music out of \u201csound poetry,\u201d a practice that uses nonverbal sounds created by the human voice to 
inspire feeling. Composers and musicians can then upload MusicLDM\u2019s sound clips to existing music improvisation software and respond in real time as an exercise to boost co-creativity between humans and machines.<\/p><p>\u201cThe idea is that you have a musical agent you can \u2018talk\u2019 to that has its own imagination,\u201d said Shlomo Dubnov, a professor in the UC San Diego Departments of Music and Computer Science and Engineering, and an affiliate of the\u00a0<a href=\"https:\/\/qi.ucsd.edu\/\" target=\"_blank\" rel=\"noreferrer noopener\">UC San Diego Qualcomm Institute (QI)<\/a>. \u201cThis is the first piece that uses text-to-music to create improvisations.\u201d<\/p><p>The team\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=1bx6PfbGuPk\" target=\"_blank\" rel=\"noreferrer noopener\">debuted<\/a>\u00a0their new AI-powered workflow as part of a live performance at the Improtech 2023 festival in Uzeste, France. Called \u201cOuch AI,\u201d the composition included performances by sound poet Jaap Blonk and machine improvisation by George Bloch.<\/p><h4><strong>Improvising with AI<\/strong><\/h4><p>Dubnov collaborated with Associate Professor Taylor Berg-Kirkpatrick and Ph.D. candidate Ke Chen, both of the UC San Diego Jacobs School of Engineering\u2019s Department of Computer Science and Engineering, to base Ouch AI on principles also used in popular text-to-image AI generators.\u00a0<\/p><p>Dubnov first trained ChatGPT to translate sound poetry into emotionally evocative text prompts. Chen then programmed MusicLDM to transform the text prompts into sound clips. Working with improvisation software, artists like Bloch and Blonk can respond to these sounds through music and verse, establishing a creative feedback loop between human and machine.<\/p><p>\u201cMany people are concerned that AI is going to take away jobs or our own intelligence,\u201d said Dubnov. 
\u201cHere\u2026people can use [our innovation] as an intelligent instrument for exchanging ideas. I think this is a very positive way to think about AI.\u201d\u00a0<\/p><p>Ouch AI\u2019s name was partly inspired by the 1997 Radiohead album \u201cOK Computer,\u201d which uses Macintosh text-to-speech software to recite lyrics in at least one song. \u201cOuch AI\u201d is also an inversion of Google\u2019s \u201cOkay Google\u201d command, placing AI in the role of the one giving the prompt, while the performer responds through art.<\/p><h4><strong>Exploring Human\u2013Machine Co-Creativity\u00a0<\/strong><\/h4><p>Ouch AI and MusicLDM are part of the ongoing\u00a0<a href=\"http:\/\/repmus.ircam.fr\/reach\" target=\"_blank\" rel=\"noreferrer noopener\">Project REACH: Raising Co-creativity in Cyber-Human Musicianship<\/a>, a multi-year initiative led by an international team of researchers, artists and composers. REACH was funded by a $2.8 million European Research Council Advanced Grant\u00a0<a href=\"https:\/\/qi.ucsd.edu\/computers-in-a-jazz-ensemble-inventing-improvisational-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">last year<\/a>.<\/p><p>As AI becomes more intertwined with society, from language to the sciences, REACH explores questions of how its influence may change or enhance human creativity. 
Eventually, Dubnov says, he would like to see workflows like the one behind Ouch AI applied to other forms of music and sound art, to encourage REACH\u2019s spirit of experimentation and improvisation.<\/p><p>For more information, visit the REACH website at\u00a0<a href=\"http:\/\/repmus.ircam.fr\/reach\" target=\"_blank\" rel=\"noreferrer noopener\">http:\/\/repmus.ircam.fr\/reach<\/a>.\u00a0<\/p><p>A recording of the performance \u201cOuch AI\u201d can be seen at\u00a0<a href=\"https:\/\/tinyurl.com\/ouchAI\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/tinyurl.com\/ouchAI<\/a>.<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Read original article Artificial intelligence (AI) shows its artistic side in a new algorithm created by researchers from UC San Diego and the Institute for Research and Coordination in Acoustics\/Music (IRCAM) in Paris, France. 
The new algorithm, called the Music Latent Diffusion Model (MusicLDM), helps create music out of \u201csound poetry,\u201d a practice that uses [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":1152,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[60,57],"tags":[],"class_list":["post-1150","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-articles","category-music"],"aioseo_notices":[],"blog_post_layout_featured_media_urls":{"thumbnail":["https:\/\/reach.ircam.fr\/wp-content\/uploads\/2024\/03\/OueAI_1200x628-150x150.jpg",150,150,true],"full":["https:\/\/reach.ircam.fr\/wp-content\/uploads\/2024\/03\/OueAI_1200x628.jpg",1200,628,false]},"categories_names":{"60":{"name":"Articles","link":"https:\/\/reach.ircam.fr\/index.php\/category\/press\/articles\/"},"57":{"name":"Music","link":"https:\/\/reach.ircam.fr\/index.php\/category\/music\/"}},"tags_names":[],"comments_number":"0","wpmagazine_modules_lite_featured_media_urls":{"thumbnail":["https:\/\/reach.ircam.fr\/wp-content\/uploads\/2024\/03\/OueAI_1200x628-150x150.jpg",150,150,true],"cvmm-medium":["https:\/\/reach.ircam.fr\/wp-content\/uploads\/2024\/03\/OueAI_1200x628-300x300.jpg",300,300,true],"cvmm-medium-plus":["https:\/\/reach.ircam.fr\/wp-content\/uploads\/2024\/03\/OueAI_1200x628-305x207.jpg",305,207,true],"cvmm-portrait":["https:\/\/reach.ircam.fr\/wp-content\/uploads\/2024\/03\/OueAI_1200x628-400x600.jpg",400,600,true],"cvmm-medium-square":["https:\/\/reach.ircam.fr\/wp-content\/uploads\/2024\/03\/OueAI_1200x628-600x600.jpg",600,600,true],"cvmm-large":["https:\/\/reach.ircam.fr\/wp-content\/uploads\/2024\/03\/OueAI_1200x628-1024x628.jpg",1024,628,true],"cvmm-small":["https:\/\/reach.ircam.fr\/wp-content\/uploads\/2
024\/03\/OueAI_1200x628-130x95.jpg",130,95,true],"full":["https:\/\/reach.ircam.fr\/wp-content\/uploads\/2024\/03\/OueAI_1200x628.jpg",1200,628,false]},"_links":{"self":[{"href":"https:\/\/reach.ircam.fr\/index.php\/wp-json\/wp\/v2\/posts\/1150","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/reach.ircam.fr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/reach.ircam.fr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/reach.ircam.fr\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/reach.ircam.fr\/index.php\/wp-json\/wp\/v2\/comments?post=1150"}],"version-history":[{"count":4,"href":"https:\/\/reach.ircam.fr\/index.php\/wp-json\/wp\/v2\/posts\/1150\/revisions"}],"predecessor-version":[{"id":1155,"href":"https:\/\/reach.ircam.fr\/index.php\/wp-json\/wp\/v2\/posts\/1150\/revisions\/1155"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/reach.ircam.fr\/index.php\/wp-json\/wp\/v2\/media\/1152"}],"wp:attachment":[{"href":"https:\/\/reach.ircam.fr\/index.php\/wp-json\/wp\/v2\/media?parent=1150"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/reach.ircam.fr\/index.php\/wp-json\/wp\/v2\/categories?post=1150"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/reach.ircam.fr\/index.php\/wp-json\/wp\/v2\/tags?post=1150"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}