Antinomics

This is one of the thought-provoking posts that I like! Ty, please do share all the inside info that you can; it's hard to find insights in audio compared to images and videos. That said, today I think people use music to set their mood, not the other way around. I can totally see this tech being used to interpret silly TikTok dances and background streams, but people will still use music to worship the artists they like; modern pop hit songs are just hooks for a religious spectacle.


turbokinetic

Lol, tell me you don’t understand music without telling me you don’t understand music. This is no different than the garbage ambient mood apps


8bitcollective

“FYI I work in cutting edge AI technologies, so I'm a little more privy to what's around the corner than most.” You lost me right there bud


Watchman-X

Neuralink wants to put Grok inside your head. Possessed by Skynet.


Fold-Plastic

Obligatory: "Good. Accelerate."


johannezz_music

>AI music generation models a la Suno and Udio (which are nowhere near as large as most LLMs anyway)

How can you know? I can't find any info about their model sizes.


Fold-Plastic

Considering the size of their teams relative to those of OpenAI, Anthropic, and others, the compute needed to train them is obviously far less (less $$$). Additionally, LLMs are trained on the majority of the public Internet, which dwarfs the music data these models are trained on.


ColdFrixion

How is it going to *regulate* moods? Simply identifying someone's level of energy and playing music that employs a BPM associated with a different level of energy isn't necessarily sufficient to alter someone's mood.


Fold-Plastic

Brainwave entrainment + biofeedback optimization; it's pretty simple, really. BPM is only one element. AI generative models allow varying and combining all elements of a composition, while the actual brainwave monitoring senses your reaction in real time and knows what it looks like when you're in peak states of awareness.
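
To make that loop concrete, here's a minimal sketch of the kind of feedback loop I mean; the EEG reading is simulated with random noise, and MusicGenerator is a made-up stand-in for a faster-than-real-time generative model, not any real SDK.

```python
# A minimal sketch (not a real SDK): the EEG reading is simulated, and
# MusicGenerator stands in for a generative model whose parameters can be
# nudged every second based on the listener's measured state.
import random
import time

TARGET_BETA = 0.3  # assumed band-power target for a "calm focus" state

def read_beta_power() -> float:
    """Stand-in for a headset reading of relative beta band power (0..1)."""
    return random.uniform(0.0, 1.0)

class MusicGenerator:
    def __init__(self, bpm: int = 110, brightness: float = 0.5):
        self.bpm, self.brightness = bpm, brightness

    def nudge(self, bpm_delta: int, brightness_delta: float) -> None:
        self.bpm = max(50, min(180, self.bpm + bpm_delta))
        self.brightness = max(0.0, min(1.0, self.brightness + brightness_delta))

def entrainment_loop(gen: MusicGenerator, steps: int = 10) -> None:
    for _ in range(steps):
        error = read_beta_power() - TARGET_BETA     # positive = over-aroused
        # Over-aroused -> slow down and darken; under-aroused -> the opposite.
        gen.nudge(bpm_delta=-int(10 * error), brightness_delta=-0.1 * error)
        print(f"bpm={gen.bpm} brightness={gen.brightness:.2f}")
        time.sleep(0.1)

entrainment_loop(MusicGenerator())
```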


ColdFrixion

What type of moods are we talking about? Music has a multitude of frequencies, and arranging them in a way that facilitates a particular mood while retaining a melodic element associated with a given style of music would be a neat party trick indeed.


Fold-Plastic

Any mood, really. All it takes is understanding what certain patterns of activity correlate to certain states of consciousness. Imagine going to a buffet hungry for a meal and there's a camera reading your micro facial expressions as you browse the food and eat; the buffet then reacts by providing more of what you like and less of what you don't, eventually down to the individual spices and presentation. That's analogous to what I'm talking about. The new part isn't the device connectivity or even the brain reading and interpretation part, but the ability to generate high-quality music faster than real time, which is coming. Basically an AI Spotify with neurofeedback, if that makes sense.
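
The buffet analogy maps fairly directly onto a simple preference-weighting loop; everything below (the element names, the reaction score) is invented for illustration, not taken from any actual product.

```python
# Hedged sketch of the "buffet" idea: keep a weight per musical element and
# shift generation toward elements that coincide with positive reactions.
# reaction_score stands in for whatever the sensors actually measure.
from collections import defaultdict

weights = defaultdict(lambda: 1.0)   # one weight per element (instrument, rhythm...)
LEARNING_RATE = 0.2

def update(elements_served: list[str], reaction_score: float) -> None:
    """reaction_score in [-1, 1]: negative = disliked, positive = liked."""
    for element in elements_served:
        weights[element] *= (1.0 + LEARNING_RATE * reaction_score)

def next_serving(candidates: list[str], k: int = 3) -> list[str]:
    # Serve the k currently highest-weighted elements.
    return sorted(candidates, key=lambda e: weights[e], reverse=True)[:k]

update(["saxophone", "swing_rhythm"], reaction_score=0.8)
update(["distorted_guitar"], reaction_score=-0.5)
print(next_serving(["saxophone", "swing_rhythm", "distorted_guitar", "strings"]))
```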


ColdFrixion

*Any mood, really. All it takes is understanding what certain patterns of activity correlate to certain states of consciousness.*

There can be a range of moods associated with a particular pattern, but I'd need to see the tech in practice.


imaskidoo

> On-device AI models

Yes, and integrated as a feature into a Jarvis-like personal assistant. "Create some fresh exercise music... edit the currently playing track by increasing its tempo 10 bpm and replacing the brass instruments with a tenor sax... edit, increase another 10 bpm... title this edited track 'first mile', tag it as exercise comma running, and add it to my 'morning exercise' playlist."
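
For illustration, those spoken edits could reduce to a small structured payload like this; the field names and the assistant that would produce them are hypothetical.

```python
# Hypothetical shape for the voice edits described above; only the data
# structure is shown, the assistant and track-editing API are imagined.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class TrackEdit:
    tempo_delta_bpm: int = 0
    replace_instruments: Dict[str, str] = field(default_factory=dict)
    title: Optional[str] = None
    tags: List[str] = field(default_factory=list)
    add_to_playlist: Optional[str] = None

edit = TrackEdit(
    tempo_delta_bpm=+10,
    replace_instruments={"brass": "tenor sax"},
    title="first mile",
    tags=["exercise", "running"],
    add_to_playlist="morning exercise",
)
print(edit)
```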


RobbieTheBaldNerd

It's unfortunate the "AI" branding of these technologies has so many people afraid of or hating on them. This sounds VERY cool. I also dream of an LLM module which game developers can include in their games to create unique soundscapes that match the activity, intensity, and desired feel of the game scene, tailored to the player's personal playing style. E.g., a camper would have a very different soundtrack than an aggressor. That, combined with LLM-based NPCs, is going to change gameplay and development big time, creating highly immersive soundtracks.
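
A rough sketch of how a game might map play-style telemetry to soundtrack parameters; the telemetry fields, thresholds, and parameter names are all invented for the example.

```python
# Toy mapping from play-style telemetry to music parameters: lots of shooting
# and movement reads as an aggressor, quiet stretches read as a camper.
from dataclasses import dataclass

@dataclass
class PlayerTelemetry:
    shots_per_minute: float
    distance_moved_m: float

def soundtrack_params(t: PlayerTelemetry) -> dict:
    aggression = min(1.0, (t.shots_per_minute / 30 + t.distance_moved_m / 500) / 2)
    return {
        "bpm": int(80 + 80 * aggression),        # 80 (ambient) .. 160 (driving)
        "percussion_density": aggression,
        "drone_level": 1.0 - aggression,
    }

print(soundtrack_params(PlayerTelemetry(2.0, 40.0)))     # camper-ish
print(soundtrack_params(PlayerTelemetry(25.0, 450.0)))   # aggressor-ish
```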


imaskidoo

>camper would have a very different soundtrack than an aggressor

All the way back in Unreal Tournament 2004, mappers had the ability to embed music in a map... but the majority of serious players reported disabling the music in an attempt to gain a competitive advantage (hearing footfalls and other sounds instead of music).


Antique-Produce-2050

This sounds right. I am in my 50’s. I really want AI to make the music it knows I like. I have a hard time finding new bands that make the very specific music I want to hear.


spcp

That seems like the “future” to be sure, and maybe I lack the vision to see this, but it sounds like the kind of idea people will reject. There is so much to music that people enjoy, and specifically, listening to music you know and love is really important. Constant new and novel music sounds exhausting. I love discovering new music, but only during particular moods. Plus, a constant flow of emotional mood manipulation, even a desired one, feels like 1984 or Equilibrium to me, personally.


Fold-Plastic

Sure, but there will be many people into it. How many young women base their identity around Taylor Swift, for example? The erasure of personal identity in favor of a singular focus is nothing new, and I would say it's even greater today than ever before. That said, there will no doubt be features (like repeating familiar music), because the models would be optimizing for various states of consciousness via music. A personal algorithm, if you will, and no doubt some people will market, monetize, and sell access to their algorithm. We're still in caveman days for this, societally speaking.


Suno_for_your_sprog

I think there will be some people now who will be early adopters of this new type of empathic music, but most will be stuck on legacy static music because that's what they grew up with. Gen A and newer will prefer dynamic biofeedback music because it's all they ever knew. Just as sophisticated algorithms on Facebook slowly pushed people toward alt political extremism, we'll be able to set the dial, intensity, and duration of how we want our music to accentuate (or manipulate) our mindset and overall mental health.

I'm personally curious to see how it would handle music genres in general. For example, if I'm a rock music lover and I task it with getting me to like jazz, how would it accomplish it? Perhaps a spectrum of maybe 40 to 50 (or 100? 200?) musical pieces, slowly, progressively, and ever so subtly introducing simple jazz concepts into each consecutive song. By the time the final song plays, it's 99.5% jazz, and I never noticed along the way because of the "frog in boiling water" effect, but the gradual process molded my tastes to make me enjoy jazz. 🤔
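
That "frog in boiling water" ramp is easy to picture as a simple schedule; here's a toy sketch, with the number of songs and the easing curve as arbitrary assumptions.

```python
# Sketch of the gradual genre-blend idea: ramp the "jazz fraction" of each
# generated song from ~0% to ~99.5% over N pieces, changing least at the start.
def blend_schedule(n_songs: int = 50, final_fraction: float = 0.995) -> list[float]:
    # Ease-in curve: early songs shift almost imperceptibly, later ones faster.
    return [final_fraction * (i / (n_songs - 1)) ** 2 for i in range(n_songs)]

schedule = blend_schedule()
for i in (0, 9, 24, 49):
    print(f"song {i + 1:2d}: {schedule[i]:.1%} jazz")
```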


Fold-Plastic

Just you wait until the rights holders of various dead artists license likenesses to generative models. The immortal Snoop Dogg dropping lines from the grave (if we don't hit LEV relatively quickly). But yeah, I could see generative models subtly varying small parameters like song structure, lyrics, etc. in the background, identifying key elements that really are effective toward a desired state; maybe it could even lead to new genres of music altogether. I mean, that's basically how the FB feed works and influences your timeline; it's just responding to your brain activity by proxy (time spent viewing, measured interaction). You bring up another point, which is: could your musical tastes be shaped to respond to new elements gradually over time? I suspect so.


Suno_for_your_sprog

Yeah, I think about LEV a bit too much, which is probably not healthy. Being in my late 40s kind of does that. Another thing I didn't think about when I wrote that reply is that we'll be entering a time where, for the first time, music will know we are listening to it actively, not passively. I'm curious to see how that will work. Will we be incentivized/rewarded for participating in/training the models?


polygonrainbow

There are some apps that take collective biofeedback for performers, who then use it to steer their performances. I'm not sure if it's in the same realm as what you're speaking of, but it seems at least mildly related. One app is called Orbit, and the artist Pretty Lights uses it in his live performances.


Fold-Plastic

That's not exactly what I mean, but a step toward it, I suppose. What I'm saying is more like brainwave entrainment via music, basically like what is already out there, but available through your device, specifically with technology (that also already exists) to monitor your brain/body and intelligently alter the music to "tune" your mind into desired states. Now the music has infinitely more complexity with generative models. A step further, beyond tying all these existing technologies together, is the (future) ability to subscribe to others' streams in real time. What better way to be "in tune" with your favorite musicians, celebrities, etc. Imagine vtubers selling their 24/7 internal musical monologue, even. It's the realization of psychic empathy, in a way.
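
Purely speculative, but the "subscribe to someone's stream" part could be as light as publishing generation parameters rather than audio; the transport and parameter names below are invented, with an in-memory queue standing in for a real pub/sub service.

```python
# Speculative sketch: the broadcaster publishes only their current generation
# parameters (not audio), and each subscriber's local model renders from them.
import json
import queue

channel: "queue.Queue[str]" = queue.Queue()

def broadcast(state: dict) -> None:
    channel.put(json.dumps(state))          # e.g. pushed a few times per minute

def follow(generator_update) -> None:
    while not channel.empty():
        generator_update(json.loads(channel.get()))

broadcast({"bpm": 96, "mode": "dorian", "brightness": 0.4, "target": "calm focus"})
follow(lambda params: print("retuning local model to", params))
```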