Episode 238: Oleg Stavitsky: Revolutionizing Music and Wellness with AI-Driven Soundscapes
LISTEN TO THE EPISODE:
Scroll down for resources and transcript:
Oleg Stavitsky is the visionary behind Endel, an innovative company at the intersection of AI, music, and wellness. With a focus on personalized soundscapes, Oleg is redefining how AI can support creativity, focus, and relaxation for users worldwide. Under his leadership, Endel creates adaptive sound environments that respond to users' needs for productivity, sleep, and relaxation, positioning the company as a leader in merging technology and artistic wellness.
In this episode, Michael Walker and Oleg Stavitsky dive into how AI can elevate creativity in music and support artists navigating the digital age. Together, they explore groundbreaking ways AI enhances productivity, personal wellness, and community for today's musicians.
Takeaways:
Discover how personalized AI-driven soundscapes can improve focus and well-being
Learn strategies for community-building and monetization in the streaming era
Understand why AI is an enhancement tool for creativity, not a replacement for artistry
Free resources:
Tune into the live podcast & join the ModernMusician community
Apply for a free Artist Breakthrough Session with our team
Tune in to learn more:
Transcript:
Michael Walker: YEAAAAH! Right. Excited to be here today with my new friend, Oleg Stavitsky. So I'm recording this from a slightly different setup. We're officially launching our Modern Musician Studio in Orlando, Florida, and we've got an exhibition designer here today who's helping to build and design the studio. So if you're used to watching me with the normal studio backdrop and the gear, that's just some quick context for what we're doing. But really excited to connect with Oleg.
So Oleg is the CEO and co-founder of Endel, which sits at the intersection of technology, art, and wellness. They offer AI-powered soundscapes to reduce stress, improve sleep, and enhance productivity, and there's a really interesting cross section between neuroscience, AI, music, and wellness.
And so I'm excited to catch him today on the podcast and talk a little bit about where things currently are as it relates to AI as a tool set, and how you can use it to enhance your productivity and creativity as a musician to help impact more people. So, Oleg, thank you so much for taking time to be here today.
Oleg Stavitsky: Thank you for having me.
Michael: Absolutely. So maybe, for anyone, this is the first time connecting with you. Could you share just a quick introduction and a little bit of a story about how you started your company?
Oleg: For sure. And you know, it shouldn't be just me. I mean, we shouldn't be just talking about me because I have six co-founders. I'm one of the six co-founders, which is pretty unusual, right? Like we're like a band almost.
Michael: There's six of us in my original band that I started with.
Oleg: There you go. Red Lake. And these people are like my second family.
This is not the first company that we've built together. Before Endel, we actually built a company called BUBL, B-U-B-L. It was a digital art brand for kids: apps that taught children the correlation of color, form, and sound. One of the co-founders is actually an established neoclassical composer himself; he's shared the stage with Nils Frahm and released on Deutsche Grammophon.
So, yeah, he comes from that world. The idea for BUBL was that we would essentially build apps that were in many ways inspired by Brian Eno's Bloom app. I don't know if you've tried that; he released this app for the iPad years ago. I think it was one of the first apps that I downloaded on my iPad 1.
And in this app, you could just tap around the screen at random and it would generate this beautiful ambient composition in real time. It was not for kids, but my kids loved this app, and that was a major inspiration for BUBL, where we built apps where kids would play around with some sort of beautiful, minimalistic digital framework, but the result would always be a musical composition generated from that play in real time. So sound, this idea of generative composition, and creating a framework where you don't directly control what's happening but play around with parts of a system that produces a composition, was always a big obsession of mine, if you will, even before Endel. But then BUBL got acquired, and the six of us were immediately like, okay, what's next?
And we're talking 2017 here, so we were looking at all of these so-called functional wellness playlists on Spotify. And they were taking off, right? I mean, now everyone's talking about wellness music and wellness playlists. It's a big, big market, but back in 2017, it was just starting.
And I was like, look, people are using sound and music to help them feel better, right? To help them achieve a certain cognitive state. I think there is a product to be built here. And with that, that's how Endel was born. We were just like, okay, let's do this.
Michael: Amazing. Maybe for some context, for anyone who isn't familiar with Endel, could you share like an overview of exactly how the app works and what it's designed to do?
Oleg: Yeah, so once we had this idea, we started talking to neuroscientists and asking them, "How do we build technology that would help people feel better by utilizing the power of sound?" And very quickly, we realized that it was going to have to be personalized and adaptive.
It cannot be just one song or one composition, one playlist for any situation. Like it has to be personalized. So we were like, okay, and it has to be personalized to like your heart rate, time of day, your circadian rhythms, your chronotype. And we were like, “Ooh, I guess we're going to have to build an AI for this.”
But again, we're talking 2017, 2018. No one was talking about AI in general, and specifically, no one was talking about generative AI as applied to music. It was just not a trend at all. But we had to build this technology in order for the science to work, right? So we built this tech, and then using it, we built the product: the app where essentially you choose a soundscape designed for a specific function, be it focus, concentration, relaxation, sleep, or meditation. The soundscape starts playing, and that sound is generated in real time, personalized to you, and it listens to inputs like time of day, weather, heart rate, movement, your sex, your age, your chronotype, all of that.
So all of that information goes into the algorithm, which creates a soundscape in real time and also adjusts the output to changes in those inputs: your heart rate changes, it reacts to that; the weather changes, it reacts to that as well.
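To make the idea of "inputs go into an algorithm and the output adapts" concrete, here is a minimal, purely illustrative sketch. This is not Endel's actual algorithm; the input fields, parameter names, and the mapping rules are all invented for illustration.

```python
# Toy illustration of an adaptive soundscape: map real-time inputs
# (heart rate, time of day, weather) onto generation parameters,
# re-evaluated whenever an input changes. Not Endel's real logic.
from dataclasses import dataclass

@dataclass
class Inputs:
    heart_rate_bpm: float   # e.g. from a wearable
    hour_of_day: int        # 0-23
    is_raining: bool        # e.g. from a weather API

@dataclass
class SoundscapeParams:
    tempo_bpm: float
    brightness: float       # 0.0 = dark/muted, 1.0 = bright
    rain_layer: bool

def adapt(inputs: Inputs) -> SoundscapeParams:
    # Nudge the tempo toward a calming fraction of the listener's
    # heart rate, clamped to a gentle range.
    tempo = max(50.0, min(80.0, inputs.heart_rate_bpm * 0.8))
    # Brighter timbres during daytime hours, darker at night.
    brightness = 0.8 if 8 <= inputs.hour_of_day <= 20 else 0.2
    # Blend in a rain texture when the local weather reports rain.
    return SoundscapeParams(tempo, brightness, inputs.is_raining)

# Re-running adapt() as inputs change is what makes the output "react".
calm_evening = adapt(Inputs(heart_rate_bpm=90, hour_of_day=22, is_raining=True))
```

In a real system this function would run continuously, feeding a generative sound engine rather than returning a static struct.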
Michael: That's so cool. So how do you get those dynamic inputs? Is it through something like an Apple Watch, or how do you measure them?
Oleg: Yeah, exactly. It's through wearables. We actually won the Apple Watch App of the Year award back in 2020, and we've integrated with other wearables as well. And yeah, your phone actually has a lot of that data, so we just ask for permission to access your location and your movement, and then we react to that in real time.
Michael: That's so cool. So it sounds like what you really set out to do is create personalization in the soundtracks. Music has always been about changing someone's state, like their emotional state. And so what you've designed is a way to tap into someone's intended state.
If they want to focus, if they want to be productive, if they want to meditate, if they want to sleep, you create music that's personalized to them and helps them attune to that state.
Oleg: Except we don't call it emotions or moods, like we don't deal with that. We deal with what we call fundamental cognitive states, like focus, relaxation, or sleep.
Michael: Interesting. So, what's the current status of… we were talking backstage a little bit before about the interview that we did recently with Nolan Arbaugh, who was the first Neuralink patient to have a neural interface, where the interface connects with his thoughts, and you can look at a screen and you can move a cursor on the screen just by thinking about it.
And when I hear about something like what you're describing, my first thought is like, man, if we were able to combine that with like a neural interface, that was literally just like real time feedback. That's super interesting. And I'm wondering based on your perspective and everything that you've built and kind of where you've also seen how rapidly AI development has evolved.
Obviously it's kind of having its moment in the sun right now, but it's been happening for long before right now as well. So, I'm curious from your perspective and everything that you've built, where you think things are headed for music artists in particular. There's a lot of indie music artists right now that are interested in pursuing and building a career and being creative.
And there's a lot of fear and concern around these tools and potentially being replaced by AI. And so, I'm curious to hear your perspective on the state of generative music and as it relates to these trends around personalization as well. How does an artist prepare, set themselves up, to best utilize the tools that are available and to not be swept away in the tsunami?
Oleg: I think, I mean, obviously the best way to do that is just to be open-minded and go try all of the tools that are available today. Just play around with them. For example, Suno and Udio. I know a lot of pretty high-profile producers who are using these tools right now, at least to create drafts of their songs, or for the so-called outpainting, where you drop in an unfinished composition and let the model finish it for you, then listen to some ideas: for example, you have a hook but you need an intro, or you have an intro and you actually need a verse, or something like that.
So it can be done right now, but the output is not good enough at this point to be like, okay, this is the song, just download it and there you go. But it can give you directions and ideas, so you can really use it as a tool these days. I really don't think that handmade music artistry is going away.
I really don't think that. I think what we're going to see is new platforms for AI-generated music emerging, for sure, where people can generate meme songs about breakfast. That is going to happen, and maybe someone will be able to generate a soundtrack to whatever, like their bike ride.
That can also happen. But ultimately, I don't want to say "human creativity," but I think it's very, very important for people to feel connected with a person, with someone's personality. And I don't think that's going away. What will get disrupted big time, and is being disrupted as we speak, is the so-called production music business: soundtracking and all, music that is created for YouTube videos or ads or stuff like that. That is being massively disrupted as we speak, for sure.
This whole business is going to change fundamentally, but at the end of the day, someone is going to have to enter the prompts; listen to what the AI generated and tweak and mold the output. And again, you need someone with a good ear to be able to do that.
So while the business and the way this music is created are going to change, I think the role of a musician is going to change too. The profession is not going to go away. I guess that's what I'm saying.
Michael: Super interesting. Yeah. So it sounds like what you're saying is that, for one, you'd recommend not closing off or being afraid of these tools or trying to avoid using them; if anything, having an open mind and starting to explore and play around with these tools, getting familiar with them, is going to help you extend your creativity.
And you do feel like there are some use cases that are absolutely being disrupted right now, where it's not really necessary that there's a name brand or recognition or a creative art; it's just music used for a commercial or a soundtrack. There are some implications there, but maybe the role of a musician will pivot a bit moving forward so that it's more about the creative process itself and being a curator of it. Potentially even people who have never learned to play an instrument but are great music listeners, who have a good ear and know what they want, can take on that role. The musicality of it isn't going anywhere; it's just that the tools available for quickly creating that high-quality musicality are getting more and more accessible.
Oleg: That's my take, yes. That's what I fundamentally believe. There's a whole other conversation to be had about what's going on with the world of streaming. That was challenging even before any of this, right? And it's only going to become more challenging, regardless of whether AI is here or not. So I don't think we need to blame AI for the fact that... I know for a fact, because we talk to a lot of artists, big and small, that everyone except for like the 0.01% of artists is struggling to make any money in streaming. But that's a whole separate conversation.
And I think what's happening is that AI is just one of the many issues that artists are frustrated about. But it's not the main issue, and I don't think it should be what they're most frustrated about, frankly.
Michael: Man, that's a great point. Yeah. So what you're mentioning is that AI can amplify or aggravate the already existing monetization issues with streaming. Because one of the biggest issues with streaming feels like it's just that there's so much music and not enough attention to go around; the attention is being spread out, and streaming isn't paying well enough for the vast majority of artists to rely on it as their main income stream.
And it does seem like AI further aggravates the problem, because now there's even more music and it's even easier to release. But that's not necessarily an issue inherent to AI itself; it's really the underlying model of how music is monetized. We don't need to go down a rabbit hole with this specific part of it, but part of what we're building with Modern Musician is a software-as-a-service that's sort of like if Patreon, Discord, TikTok, and Notion all had a baby. It's really designed to help address some of the main issues right now in terms of community building and monetization for artists and creators. So I'd be remiss not to mention that that's the core issue we're dedicated to helping solve with this platform and with our software.
But you're speaking my language; everything you're saying is very aligned with how I see things happening right now and where they're headed. And I'd be curious to hear your perspective: if monetization is the issue, and right now you're not able to really monetize streaming, what do you see as some of the biggest opportunities for music creators or artists to actually start to cultivate a community and monetize their music?
Oleg: That is a billion-dollar question, I would say, right? And we're seeing all kinds of models. I think we're seeing the beginning of the great unbundling of the streaming world. I mean, it's not like Spotify is going away, right? But more and more people are going to see it as almost like a marketing platform for their art. And yeah, community building and superfans are kind of the name of the game right now. What I'm seeing other artists doing is actively looking for ways to monetize outside of the streaming ecosystem. Look at what James Blake, for example, is doing with this product called Vault, where he puts a lot of his unreleased music behind a paywall and his superfans pay him directly to access it.
Remixes, unreleased tracks, B-sides, all that stuff. That is one model, for example, right? So I think building a community of superfans around you and being able to offer them something that they would pay for directly, I think that's the way to go. But again, you know, I'm not a musician.
What we're doing is adjacent to the music industry, but I'm not an artist manager. So this is, I guess, kind of an uneducated take based on what I'm seeing out there.
Michael: Makes sense. Yeah. And for an uneducated perspective, that feels very on point. So maybe to bring things back into your wheelhouse a little more: music and productivity, but also AI and productivity in general. I'm curious what you would recommend as some of the biggest opportunities for artists to explore as it relates to using AI.
And even as I ask this question, I know these things are changing so fast. I mean, just yesterday OpenAI dropped a new model with much better reasoning capabilities. So maybe the better question is a more fundamental mindset one: in a world where technology and the available tools are evolving so rapidly, how do you set up your environment in a way that allows you to quickly adapt, be resilient, and make use of those technologies?
Oleg: Well, I think it really comes down to having an open mind and just following these companies. For example, Suno, Udio, and others have their Discord channels. I would absolutely be there, looking at the latest models they're putting out. There are a few open-source models as well.
I think being able to train or fine-tune a model is a very important skill for a musician right now. But just to explain what I mean here: obviously, there are the so-called foundation models that have been trained on millions and millions of hours of data, and there's this whole lawsuit happening right now trying to determine whether the data sets they used were properly obtained, right? Number one, a single musician cannot just do that at home; it requires massive data sets and massive computational power. But then there's the so-called fine-tuning process, where you're able to train a model on your own material. You would have like a Michael model, right?
You would use a foundation model, but then on top of that, you would take all of the music that you have created and train a model that can do what you do, right? It's not going to be able to do music in any style, but it's going to be able to create music in your style. Being able to do that, and knowing how to do that,
I think is super important. What I'm saying is, a fine-tuned model designed for a very, very specific task, trained on a genre or a specific artist, is a super important skill, and all these AI companies are actually actively looking for musicians who are able to do that right now, right?
This fine-tuning process really involves a lot of listening to model output and being like: no, that's bad; yes, that's good; no, that's bad; yes, that's good, right? So familiarizing yourself with the tools for doing that, I think, is super important. Then you're going to be able to use that model in your own work, but you're also going to be able to monetize that skill beyond what you're doing. There are going to be a lot of opportunities, and really, I think that skill is what's important right now.
Michael: That's super interesting. I've been doing a lot of exploration as it relates to software development and using AI to help create AI integrations and whatnot. And I've dabbled with the concept of fine-tuning. We primarily use the OpenAI API, but we've explored some of the other ones as well.
It seems like there's some great options and there's some that you kind of choose between the different models as well. But if I'm understanding you correctly, what you're saying right now is that one of the things to kind of pay attention to is over time, as much as possible, getting familiar with the process of fine tuning your own model.
And when you say that, are you referring mostly to developers who have the ability to actually use the API, so they can create their data sets and then encode it into a platform? Or do you think that for artists who are looking for more of a no-code solution, there will be more accessible ways to fine-tune their own model moving forward?
There might already be some tools that are trying to solve that issue. But yeah, when you say fine-tuning, what do you have in mind exactly? All of the above, or mostly the no-code solutions?
Oleg: All of the above, right? If you are able to code... I mean, musicians and music artists who are able to code are the rarest kinds of people out there. Seriously, if you have those two skills combined in one person, you're going to do great, right?
These kinds of people are very rare, and everyone wants them on their teams. So that's one. If you do not know how to code, then be able to use some of the no-code tools, and be able to collect the data set and tag it properly. By data set, I mean your data set, your music, properly structured, tagged, and cleaned up. Knowing how to feed that to a foundation model and train a more specific model, that is a super important skill. I think it can be monetized in so many ways.
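The "collect, tag, clean up" step Oleg describes can be sketched in a few lines. This is a hypothetical illustration, not any specific platform's ingest format: the file names, tag fields, and validity rule are all made up, and JSONL (one JSON record per line) is just a common shape that many fine-tuning pipelines accept.

```python
# A minimal sketch of preparing a tagged, cleaned-up music data set:
# pair each track with structured metadata, drop obviously invalid
# records, and write one JSON object per line (JSONL).
import json
from typing import Optional

# Hypothetical catalog of your own tracks with hand-applied tags.
tracks = [
    {"file": "my_song_01.wav", "genre": "ambient", "mood": "calm", "bpm": 70},
    {"file": "my_song_02.wav", "genre": "ambient", "mood": "tense", "bpm": 0},  # bad BPM
]

def clean(track: dict) -> Optional[dict]:
    # Reject records with implausible metadata instead of training on them.
    if not (30 <= track["bpm"] <= 220):
        return None
    return track

with open("dataset.jsonl", "w") as f:
    for t in tracks:
        cleaned = clean(t)
        if cleaned is not None:
            f.write(json.dumps(cleaned) + "\n")
```

The resulting `dataset.jsonl` is the kind of artifact you would hand to a fine-tuning tool, whether through code or a no-code interface.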
Michael: Well, thanks for sharing that. You've certainly piqued my interest in terms of what we're building, and I've been interested in learning the fine-tuning API and structuring our data. With the software service we're developing, we have data sets we could fine-tune a model on, in particular for our digital marketing use cases. We have lead qualification systems for application scoring, and we already have a bit of a prediction model: based on how the application questions are answered, it scores the lead and passes that back to the Meta Conversions API pixel.
But it is a little bit of a hard-coded thing, where it's based on the specific questions right now. That's how we did the scoring initially. Then we made it more of a blanket approach: here are the questions, here are the answers; based on this, what would you estimate? Just asking the AI.
But I think the next step for us is actually doing a legitimate fine-tuned model: "Hey, here are all the applications, here are the results of which ones were a good fit, and what their lifetime value was." We want to feed that back into the prediction model and integrate the fine-tuned model to improve it.
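The step Michael describes, turning past applications and their known outcomes into fine-tuning examples, can be sketched as follows. This is a hedged illustration: the applications, scores, and prompt wording are invented, and the `{"messages": [...]}` JSONL shape shown here is the OpenAI-style chat fine-tuning format, used as one plausible target.

```python
# Sketch: convert historical applications with known outcomes into
# chat-format fine-tuning examples, one JSON object per line (JSONL).
# All data below is fabricated for illustration.
import json

past_applications = [
    {"answers": "Monthly listeners: 5000. Budget: $500/mo. Goal: grow superfans.",
     "outcome_score": 8},
    {"answers": "Monthly listeners: 50. Budget: $0. Goal: not sure yet.",
     "outcome_score": 2},
]

def to_example(app: dict) -> dict:
    # Each training example teaches the model: given this application
    # text, emit the score that the historical outcome justified.
    return {"messages": [
        {"role": "system", "content": "Score this artist application from 1-10."},
        {"role": "user", "content": app["answers"]},
        {"role": "assistant", "content": str(app["outcome_score"])},
    ]}

with open("leads_train.jsonl", "w") as f:
    for app in past_applications:
        f.write(json.dumps(to_example(app)) + "\n")
```

From there, the JSONL file would be uploaded to whichever fine-tuning service you use, and the resulting model could replace the hard-coded scoring rules.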
So, I've been like kind of playing around with these ideas and I think this conversation might have been what I needed to just like, over the weekend for fun, take a real crack at exploring some of those fine tuning capabilities. So, thank you for sharing that.
It's also funny you mentioned that the cross section of coding and musicality is a powerful one. That's basically our team: a bunch of very smart musicians who are also developers, at least as it relates to the software-as-a-service.
So, could you tell me a little bit more? I mean, the platform you guys have built is so cool, but can you talk a little bit more about some of the main benefits and use cases that you've seen people using the platform for? If anyone here is listening to this right now and they're interested in exploring, just like using it for themselves, for their own productivity.
Oleg: Yeah, the main use cases are focus and concentration, and sleep. We have a lot of users who have been diagnosed with ADHD, and they would swear on their lives that Endel helps them. And it really does work. We know this; not only do we know this, we've proven it in a proper peer-reviewed scientific experiment. We put out a white paper that was peer-reviewed by an independent reviewer and published in Frontiers in Computational Neuroscience, which is a very reputable scientific journal, and it shows that listening to a real-time, personalized, adaptive soundscape is more efficient for concentration and attention than listening to a static playlist or an album. So concentration is a big one, whether you have ADHD or not. Obviously, if you're composing music, you're not going to listen to anything; you're in it.
So that's maybe a bit hard, but when you're doing some admin work, that is just perfect, right? To keep you in the zone. If you have to just kind of churn out 10 emails, this is perfect for that. Another one is sleep. Sleep is huge, a massive use case.
A lot of people use it for sleep. It helps you fall asleep faster, but it's designed to play throughout the night. So, it also helps you kind of not wake up during the night. And we have some sleep soundscapes that we have designed with some amazing artists like Grimes and James Blake.
So, there's that. I would say focus and sleep are two of the main use cases. Then another big one is relaxation. People who struggle with anxiety, for example; that's a big one. A lot of people use it for that. It's kind of an SOS button for when you really feel overwhelmed.
It does help you. It activates your so-called parasympathetic nervous system, brings your blood pressure down, relaxes your muscles, things like that. So yeah, focus and sleep: if I had to say what people mostly use it for, it's those two use cases.
Michael: Awesome. On that note, are there any meditation soundscapes, and what are the use cases for those?
Oleg: Yeah, of course, there is a meditation soundscape. Lately we've been kind of experimenting with all kinds of interesting kind of fringe use cases. Like there's one for sex, for example, that we have designed with a sex sound designer, believe it or not.
Michael: Viagra.
Oleg: Yeah, exactly.
Yeah. So, there's that. We have also been working with an ADHD influencer, a guy with a million people following him on TikTok and Instagram. We've been working with his community on developing some solutions specifically tailored for people with ADHD.
And one thing that we did was a noise soundscape, which is crazy, right? It's literally where you're able to combine all kinds of colored noises, be it brown noise or black noise. There are a lot of YouTube videos where you can just listen to brown noise for 8 hours; you don't need an app for that. But what we did is an interactive soundscape where you get to combine different frequencies into one noise, and people really respond to that. It's literally like you're listening to noise. That has been a bit extreme, but a lot of people really love it. Binaural beats, obviously, are a big use case as well.
Michael: Awesome. Yeah, I'll have to explore some. We're considering getting a float tank here at the Modern Musician studio, and I'm guessing some of the soundscapes would be a great fit for that kind of environment.
Oleg: A hundred percent, a hundred percent, yeah.
Michael: Awesome. All right. Well, man, this has been a fun conversation about the state of AI and humanity, but also specifically for musicians: how can we, in a world that's changing so quickly, have the right mindset so that we can be adaptable and know what to focus on?
I personally am feeling inspired by some of the concepts you shared around fine tuning as well, and applying some of these models to our own specific use cases. So, thank you again for taking the time to come on here and share some of the wisdom and lessons that you've learned.
And for anyone that's interested in learning more about Endel and essentially signing up for the app, where would be the best place for them to go to connect more?
Oleg: You can just go to the App Store and search for Endel, E-N-D-E-L. Or you can go to endel.io; there's obviously a website that talks about the science and technology behind it and why we built what we built. There's a manifesto there that I actually wrote about why we're doing what we're doing. But just try the app.
Michael: Well, like always, we'll put the links in the show notes for easy access. But Oleg, thanks again for being on the show. It was fun talking to you today.
Oleg: Thank you. Thank you for having me.
Michael: YEAAAH!