home.social

#musiclm — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #musiclm, aggregated by home.social.

  1. What? This is like hardstyle got a bit too personal with drum and bass. #musicLM

  2. I mean, this is closer. It's got the kicks, sorta. #musicLM

  3. Why listen to #MusicLM when you have people like Kyle Landry, Frank Tedesco, Patrick Bartley (and more) live on the web?

  4. (Though, in fairness, as long as you have "jazz" in the prompt, #MusicLM will give you something that sounds not unlike jazz.)

  5. Just started playing with Google's #MusicLM and it is mind-bendingly awful.

    With serious slogging you can get to the beginnings of a riff that a really good musician might be able to work with, but it's nowhere near worth the effort.

    (If somebody asks you to be a music prompt engineer, ask for a very large amount of money.)

  6. Watched a YouTube video on Google's #MusicLM earlier this evening and my jaw is still on the floor. AI is getting wild.

  7. #Google's #MusicLM, which uses #AudioLM, may have just changed the whole #TextToMusic #AI landscape. Without using any diffusion, MusicLM creates extremely high-quality (24 kHz) #audio with consistent results that are jaw-dropping. It is probably the first working, direct text-to-music model that is accurate and fully synthesized.

    youtube.com/watch?v=2CUKU2iAzA

    #GenerativeAI #ArtificialIntelligence #ArtGenerators #Music #ComputingHistory #ComputerArt

  10. Between #ChatGPT and #MusicLM (among others), it's going to be interesting to see what becomes of art and creative expression in general in the years to come.

    "Interesting"

  11. Images, text, and now music… generative #ai is here to stay, and the debate on what place and role it will take in our society is just beginning.

    Why? Because humans don’t think ahead… we can, actually, but with #tech we often don’t. So here we are asking, “now that we’ve created this #technology, how do we integrate it and how do we use it right?” 🙄

    I can’t wait for generative pizzaiolo AI… ah, can you imagine the flavors on that pizza?
    😳🍕😬

    “An impressive new AI system from Google can generate music in any genre given a text description. But the company, fearing the risks, has no immediate plans to release it.

    Called MusicLM, Google’s certainly isn’t the first generative artificial intelligence system for song. There have been other attempts, including Riffusion, an AI that composes music by visualizing it, as well as Dance Diffusion, Google’s own AudioML and OpenAI’s Jukebox. But owing to technical limitations and limited training data, none have been able to produce songs particularly complex in composition or high-fidelity.

    #MusicLM is perhaps the first that can.”

    Read the full article on TechCrunch and please share your thoughts on the matter … the artistic copyright one in particular 👉 techcrunch-com.cdn.ampproject.

    #music #art #books #writing #ethics #philosophy #sociology #news #artist

  12. How does "The Scream" by Edvard Munch sound as music?

    What #ChatGPT is for text and speech and #Midjourney is for images, #MusicLM is for music. It composes music from images or text descriptions. The AI was trained on 280,000 hours of music.

    It has not been published yet, because there are significant copyright concerns.

    However, as a DAW plugin that generated separate, editable MIDI tracks, I could imagine such an AI serving as a source of inspiration.

    #music #KI #sound

  13. #MusicLM - like #ChatGPT but for music. Some of these samples are rather creepy but it's quite impressive overall.
    google-research.github.io/sean

  14. Google’s new AI model creates songs from text descriptions of moods, sounds - arstechnica.com/?p=1913289 #machinelearning #musicsynthesis #musiclm #biz #google #ai

  15. 🧠 After #AudioML comes #MusicLM: a new #Google model that generates music from a text prompt.
    😯 The algorithm is multimodal, so it can also turn a text description plus a base melody (e.g. a hummed one) into structured audio.
    🎧 The examples are nothing short of astonishing.

    🔗 MusicLM: google-research.github.io/sean
    🔗 A deeper dive: alessiopomaro.it/algoritmi-gen

    #AI #TextToMusic #IntelligenzaArtificiale #multimodalità

  17. What does a painting sound like?

    After #ChatGPT and #Dalle2, now comes #MusicLM, which turns text into music and even images into sound!

    Try the examples here ➡️ google-research.github.io/sean #KI #AI

  18. @simon I don’t think #MusicLM will put any musicians out of work. Like the text generation programs, it produces merely plausible patterns, devoid of meaning. Unlike the chat machine learning programs, though, we can tell instantly that these examples are crap. Does this mean quality is easier to discern than truth? I hate to think of the samples the authors decided to exclude.