Episode 147: The Creative Process, Music Ownership, and the Future Evolution of AI in Music Production with Mr. Bill


LISTEN TO THE EPISODE:


Scroll down for resources and transcript:

Mr. Bill is an electronic music producer and DJ from Sydney, Australia (currently residing in the USA), best known for his willingness to share tips on studio techniques and his massive library of technical know-how with other aspiring producers. With a huge catalog of released original music (including many collaborations and remixes), Mr. Bill has worked with established artists such as deadmau5, Ganja White Night, Subtronics, Dirt Monkey, Virtual Riot, Infected Mushroom, and more. Mr. Bill runs the educational website mrbillstunes.com, which houses 1,000+ hours of educational video content teaching the music production software Ableton Live, and he hosts “The Mr. Bill Podcast”.

Mr. Bill, one of electronic music's top educators, joined us in this episode to discuss the implications of artificial intelligence in the music industry and other philosophical topics.

Here’s what you’ll learn about: 

  • Exploration of AI's role in music production and how AI-generated content can serve as a creativity stimulus rather than replacing human creativity.

  • The value of references and comparisons in music production, learning from existing music, and incorporating one's unique artistic touch.

  • Speculation about the future evolution of AI in creative fields, including music production, and its ongoing collaboration with human creativity.

Bill: One thing I hear people say a lot about AI, and, sorry, I'm going on a bit of a tangent now, but they're like, oh, but if AI can write music for us, then we're fucked. Right? But like, how many times in your life have you heard a song where you thought I wouldn't change anything about that song. That song is perfect. Like to me, that's happened like barely ever. Like it happens barely ever, maybe a few times in my life. So I feel like anything that AI generates, at least anything that I've heard it generate at this point, it really just sparks ideas for me.

It makes me go like, oh, that gives me an idea. I would change that and that, and then I would build this other section off it and stuff like that. So it's like a creativity stimulus generator basically.

Michael: It's easy to get lost in today's music industry with constantly changing technology and where anyone with a computer can release their own music. I'm going to share with you why this is the best time to be an independent musician and it's only getting better. If you have high-quality music, but you just don't know the best way to promote yourself so that you can reach the right people and generate a sustainable income with your music, we're going to show you the best strategies that we're using right now to reach millions of new listeners every month without spending 10 hours a day on social media. We're creating a revolution in today's music industry and this is your invitation to join me. I'm your host, Michael Walker.

All right. I'm excited to be here with my new friend, Mr. Bill. Mr. Bill is an electronic music producer and DJ from Sydney, Australia. He is currently in the USA though, and he's best known for his willingness to share tips on studio techniques and a massive library of technical know-how. He's worked with some established artists that you might've heard of, like deadmau5, Ganja White Night, Subtronics, while also touring the globe extensively and playing many major music festivals and world-renowned venues like Red Rocks Amphitheatre and Hampton Coliseum.

And he runs an educational website called MrBillsTunes.com, which has over a thousand hours of educational video content teaching the music production software Ableton Live. And he has an awesome podcast called The Mr. Bill Podcast. So I'm really excited to have him on here today and have a discussion about AI, music production, the future of the music industry.

Watching some of your videos, Bill, I can tell that you're someone that has a lot of experience in the music industry and also a lot of, you know, experience in the software side of things as well and with, you know, everything that you teach with Ableton Live. So I think you have a unique perspective and I'm looking forward to geeking out with you a little bit about the future of the music industry.

So, yeah, I know that was kind of a long intro, but thanks for taking the time to be here today.

Bill: Yeah, of course. Thanks for having me. I appreciate it.

Michael: Cool. So, maybe to start out with, you could just introduce yourself briefly for someone who's watching this right now and hasn't connected with you before.

Could you share a little bit about your story and kind of how you reached a point where you're able to work with artists like deadmau5 and tour worldwide?

Bill: Yeah, so as you said, my name's Mr. Bill. I've toured a lot, I've done a lot of work with a lot of people and I continue to do a lot of work. Yeah, I don't know.

Basically I like, I started out as a guitar player playing in metal bands when I was younger. And then I got into electronic music in Australia. I was going to like psytrance parties and stuff like that. And I thought, hey, this is cool. Like, it's kind of got a lot of the same traits as metal, but it's like much cleaner and much tighter.

'cause obviously it's all produced. So I was like, this is awesome. I love this. So I like wanted to learn how to make it. So I started messing around with different DAWs. First was GarageBand and the second was something else. And then I found like FL Studio and then eventually I found Ableton and I was like, oh, this one makes the most sense to me.

It's the most intuitive. And without having like any prior knowledge as to how audio signal flow works, or like what is MIDI, or what is a VST, or what is audio even represented as in the digital world, like, I had nothing. I knew nothing about all of that stuff, but I was just intrigued, and Ableton just felt natural to begin with.

And then, after I'd been using Ableton for a number of years, I decided I'm gonna go to university and get an education in audio engineering, so I actually understand all of this stuff at a more fundamental level. So I went to SAE in Sydney, which is, I don't know what SAE stands for, School of Audio Engineering or something.

I don't know. So I went there. They have campuses all over the world, but I went to the campus in Sydney, which was in Surry Hills at the time, and did a bachelor's degree over two years, which is pretty quick, I think, for a bachelor's degree. And then, by the end of that, I had put music out through that time when I was going to university, and I had also been putting out tutorials on things that I would learn at college.

I would instantly go home and be like, this is like applicable to this thing in Ableton. And I would make a video about it and like, show people. So, yeah, that's basically how I got started. And then by the time I'd finished college, I was doing a couple of shows here and there in Sydney and I was, you know, already starting to collaborate with other people and work with other people.

No, no big people. Just friends and stuff. And I had a bunch of other friends who were using Ableton too, so we would like kind of learn off each other and all that stuff. And it just kind of spiraled out into where I am today, I guess. Hmm.

Michael: Cool. Yeah, that's a similar trajectory to where I started as well. But you're probably about a thousand times more talented than I am when it comes to like Ableton. But I, I remember starting with GarageBand and, you know, eventually finding a way to Ableton. And it's, gosh, it's amazing. It's amazing what you can do right now, and maybe we can venture into this as part of our discussion now, but some of the AI tools and the technology that are starting to become available. I'm a little bit disconnected from the live production scene, so I don't know how it's impacting live production and Ableton specifically. But I'd definitely be curious to hear your perspective on what's currently happening right now around AI in the music production space and where you think it's going.

Maybe even before we get to that point though I know at this point, you know, you've, not only have you done this for yourself and you've really mastered the art of music production in a lot of ways but you've created over a thousand hours of tutorials and education and, and worked with, you know, countless other artists to help them with their music production.

So I'm curious, you know, from your perspective now, having worked with so many artists, what are some of the biggest recurring patterns or challenges that you see artists struggling with when they first come to you?

Bill: Do you want me to answer both questions? The AI one and that one, or just one?

Michael: Let's start, let's start with the second one. We'll, we'll come back around. We'll plant the seed for the AI and, and come back to that one.

Bill: Sure. Recurring patterns for, for growing producers that I see. Uh, hmm. Let's see. One thing I've noticed a lot when I look at project files from people who are just starting out is they often get what I call constipated project files, where making any musical change becomes really difficult to do because of the way that they've set everything up.

So I think it's important to find a way that you can set your session up so that when you have a musical idea or you want to change something in the mix, it's possible to do that without having to go through like 10 extra steps. That's one thing I notice. Another thing I notice is, I mean, to be honest, like.

The difference between somebody who's like making professional sounding stuff and somebody who's not is just the amount of hours of practice. It's the same with anything. It's like, I can play guitar and then, you know, Yngwie Malmsteen can play the guitar, and we're both playing the guitar basically the same.

But the difference is like he just has this extra like dimension of finesse and, you know, skills that I don't have to play like faster and all this kind of stuff. So yeah, I think a lot of the stuff that I see beginners doing is the same stuff that I do, but it's just that I do it in a certain way that makes it sound like me.

Mm-hmm.

Michael: Hmm. That, that totally makes sense. Yeah, and I mean, I know the 10,000 hours rule is sort of like a guideline around the idea that doing anything worthwhile takes time to invest,

Bill: I think even more than 10,000, to be honest. Like I would say at this point, I, I don't even know. I haven't counted, but I think I've logged a lot more than 10,000 hours and I'm still learning something every day.

I feel like the further I get into this shit, the more I feel like I'm not a master at it.

Michael: Yeah. And that's such an important mindset, I think, that you're embodying. It seems like this is a recurring pattern around people who are the most successful, is that they have this level of curiosity or this level of like, beginner's mindset, where they're willing to learn and acknowledge that there's always more, there's always a deeper, you know, level you can go to.

Bill: Also like a level of honesty, I think about the work. Like, I remember when I was younger, I would make something and it would be like half-assed and not that good. But I would listen to it and I'd be so excited about the fact that I made it, that I would like convince myself that it was really good and I had no ability to like judge it against other stuff and hear it as worse or better, you know?

And I think like the more you get into it and the longer you do it for, the better you are at being honest about whether or not something is good. And the more you realize that even though you put a hundred hours into a thing, it doesn't necessarily make it good. And I don't know, you just get a little bit more like, normalized in that way.

I think, if that makes sense.

Michael: That totally makes sense. And it actually brings up something that I'd be curious to hear your thoughts on. 'Cause like the two ideas you just shared have a little bit of, like, juxtaposition, in that you mentioned, you know, someone could spend a hundred hours working like endlessly on one project, trying to tweak a minor thing without it really making a big difference.

Whereas, at the same time, like, a level of mastery requires time and commitment. Mm-hmm. And like energy. So it does seem like, you know, the way that you spend that time makes a huge impact in terms of the leverage that you get out of it. And as it relates to what you just shared with,

Yeah, having honesty to be able to compare your work to, you know, other pieces of work and judge objectively the quality of it. I'd be curious to hear your process and kind of how you think about using references, or how you think about comparing your work to other pieces of work, which can be a double-edged sword, because on one level it's like, yeah, it's great to reference other things in order to improve your own quality, but also, where do you draw that line between just comparison, like being yourself versus, you know, overly comparing?

Bill: Yeah. Well, I think musically and also mix wise and mastering wise and stuff, you have to always make comparisons because I mean, you do make comparisons whether you accept it or not.

Like putting a track next to another track and looking at it is just a strong way of making that comparison. But either way, like the music that you're making today is based on influences that you've had in the past, so you're comparing it to like, ways that you felt and ways that you heard in the past anyway, whether you accept that or not.

But yeah, I think it's important to, to obviously do things your own way and be creative and stuff. But it is important, I think to also like stay within some guidelines that, you know, work. And that, you know, people enjoy. 'cause at the end of the day, I mean, you, you make music for your own reasons.

And one of the big reasons that I make it is exploration and just having a fun time. But another reason I make it is 'cause I wanna make people feel good. And I want to feel good when I show it to people on the dance floor and they feel good and then everyone's having a good time. Like, that's the, that's one big reason.

And so, to some degree, I do wanna stay safe within those guidelines of knowing that when I show somebody something that it's probably gonna be, they're probably gonna have a good time. Your YouTube algorithm does this too, you know, it's like, it shows you videos that it thinks that you're gonna have a good experience with, so you stay on the platform.

And I think like that, that's also important in music. But yeah, as for references with mixdowns, I personally think there's a pretty objectively good mixdown to have. Like, a lot of people are like, oh, it's subjective, that bit of distortion there is an artistic choice. I personally don't think so. I mean, obviously it is subjective, but there is a particular thing that I like in a mixdown, and to me that's like the objective thing that I'm trying to reach when I get to a mixdown.

So references are really important for that to look at on graphs and, you know, see if you're hitting the same metrics as another track. And also obviously just listening and seeing if it's like feeling somewhat in the same range. Mm-hmm.

Michael: Yeah, super smart. It definitely seems like there's a superpower in being able to stand on the shoulders of giants or be able to learn from, you know, the people who've already invested tens of thousands or more hours working on their stuff and be able to kind of, yeah, glean some insights from that investment that, that they've made while also learning you know, who you are and learning how to add your own unique, you know, sauce to it.

Bill: Yeah, I think it's inevitable that you add your own sauce to it for a few reasons. Like one, your ears are just built different to everybody else's. They're unique in the shape of the canals and the pinnae and like all that kind of stuff. Secondly, obviously everyone has different influences.

Everybody has heard and gelled with music differently. Your listening environment plays a big part of it. Like if somebody's writing on headphones, they're probably gonna make something that sounds really good on headphones, but may not necessarily sound good on speakers, or vice versa. There's so many things. Also your DAW and the tools that you have available.

Like maybe you have different plugins available to you than somebody else. Maybe you have different sample packs available to you than anybody else, and every one of those tools influences you along the way. So it's like this giant fractal of, you know, differences that will sort of inevitably make your music yours.

I feel like. And really, at the end of the day, you just want to feel like you have ownership over it, right? Because like, you can generate music now with AI, but I find the problem with it is, when I generate it, even if it sounds like me because I trained it on my music, I'm still like, it doesn't feel like mine.

Like I don't have that sense of ownership. I have that sense of ownership more if I train the model on my stuff, because, like, I put some work into it. But if I just generate something off, like, MusicLM, that I didn't train on anything of mine or something, then I never feel like I have that sense of ownership, and that becomes a problem for me.

I can't like, resonate with a piece of music that is mine if I don't feel that sense of like, ownership over the work. But one, one thing I hear people say a lot about AI, and, sorry, I'm going on a bit of a tangent now, but they're like, oh, but if AI can write music for us, then we're fucked. Right? But like, how many times in your life have you heard a song where you thought I wouldn't change anything about that song. That song is perfect. Like to me, that's happened like barely ever. Like it happens barely ever, maybe a few times in my life. So I feel like anything that AI generates, at least anything that I've heard it generate at this point, it really just sparks ideas for me.

It makes me go like, oh, that gives me an idea. I would change that and that, and then I would build this other section off it and stuff like that. So it's like a creativity stimulus generator basically. So, yeah, I don't know. Anyway, bit of a tangent. Mm.

Michael: It's super interesting. Yeah. And a, and a great segue to the AI part of the conversation.

You know, one thing that I thought was interesting as you described it was how, really, like all of us, whether we're aware of it or not, we have our influences and we have our, you know, underlying data set that is comprised of our role models and our experiences and maybe our, you know, our epigenetics.

And that reminds me of, you know, the way that AI models work and how these AI models are trained based on other, you know, data sets and create a seemingly original, you know, piece of work. It's really interesting. I love the perspective that you just shared, and I think I've heard other people talk about this too, about AI really just raising the floor.

So it's, it's not, at least in the short term. Maybe we could talk about this too, about like the singularity, about the longer term. Obviously a lot of this is like reading the tea leaves, but,

Bill: Well, another, like a point to that, like, oh, I'll talk about it like in the future, right? One, one idea that I heard was somebody was talking about airplanes.

And they said, like, if you thought about it like in the eighties when we start, or whenever we started making airplanes, I don't even know when it was, but like when you saw the first airplanes, you're probably like, oh man, imagine in 2023 they're probably gonna have flying cars and like fucking all sorts of crazy shit.

But like, basically, like, airplanes, they just got to a point and then they stopped advancing too much after that, because we got them up to the utility that we needed them to be at to make the world function at that level. And then we stopped developing 'em as hard, you know. And like maybe with AI, we're just on that exponential part of the curve right now.

Maybe. I mean, AI is obviously different to airplanes, but who knows? I mean, it's possible that, like, it might just get to a point where it makes the world function a lot easier and takes away a lot of the shitty parts about, like, making music, and a lot of the shitty jobs of accounting, and like all these shitty jobs that no one wants to do, it can like remove those.

And basically put us at a higher operating level. And then maybe it'll stop being developed as much, you know, who knows? But we, we also taught it about humans, attached it to the internet, and taught it how to program itself. So that's also a possible problem. You know, like, you shouldn't ever teach AI how to write code and then connect it to the internet.

That's just a bad idea.

Michael: Yeah. And that's like the primary use case I have for it right now. It's like using it to help write code and it's pretty dang good. It's a lot

Bill: better. I know, right? It's crazy. So imagine if it starts writing code, you know, to do all sorts of malicious shit. Yeah.

Michael: Yeah. That's probably one of my bigger concerns with AI is, is I mean, I don't know.

I don't know if I necessarily buy into the idea that, like, AI is gonna be inherently or intrinsically, like, evil, or it's gonna wanna like harm things, but,

Bill: Well one of the points that I heard that Elon Musk made is like, for instance, if there's, like, if we're trying to build something, right, like a building and there's an anthill on the property that we need to get rid of to build the thing.

Yeah. We don't really think about the anthill. We just knock it over and we build. Yeah. And we don't like, have much empathy for it or give a shit. We're like, fuck it, it's an anthill. Get it outta the way. Let's build. And he, he was saying he thinks AI might be similar to that. It's like, it's not necessarily gonna be evil against us or malicious, but if we're just in its way, you know?

Yeah.

Michael: I think that's a great analogy. Because I think most people, unless you're, like, I don't know, a traumatized person, you're probably not gonna wanna torture ants or kill ants just because. You know, it's not really like, I'm just gonna, like, murder these ants. But, right, that's a good point.

Like, the way that we think about them is just a different level of magnitude of intelligence, and mm-hmm. And there, there is something to say about, you know, higher levels of intelligence. You know, I think most people, you know, aren't gonna cry themselves to sleep if they slap a mosquito and kill it.

But like, you know, you did literally just murder an intelligent, you know, some level of intelligent being. But yeah, I mean, if AI does reach that point where it's, you know, equivalent to our current relationship to ants or mosquitoes, then I think we have some pretty big, you know, I don't know if issue is the right word, but I mean, I think we've missed the mark quite a bit as it relates to, like, aligning with AI, or potentially merging with, you know, AI so that we're not left behind. Okay. We're, we're going there. We're, we're gonna go, go geeky. This is my favorite type of conversation. So I would love to hear your perspective on AI as it relates to that concept of, you know, do you think that it's going to reach a point where it's orders of magnitude smarter than humans?

Bill: I, in a way, think it is already, right? Because, like, yeah, if you think about the smartest person you know, why are they smart? It's 'cause they've read a lot of shit and they remember a lot of shit and they know how to apply that stuff to other stuff. And you're like, well, that guy's really smart.

Like anything I ask him about, he knows something about, you know, 'cause he's read the Wikipedia of it and he remembers it or something. Right. And AI is like that on steroids already. So it's like, it depends how you measure intelligence, really.

Michael: But it doesn't have a soul though.

Bill: Well, do you measure intelligence on how soulful somebody is?

'cause in that case, like, you know, the Dalai Lama is the smartest person or something.

Michael: No, that's, that's a good point. I mean, I think one of the biggest, you know, components of this AI-human conversation is it's starting to bring up, you know, elements of: is it conscious, or does it have a soul?

Or what does it mean to have a soul? Do we even really know exactly what a soul is? Or do we understand fully what consciousness is, and what, you know, what does it mean? Yeah. Like, how do we know?

Bill: We're just, like, if it is, we're just composed of, like, a bunch of atoms and whatever. And any information that we have and any free will that we feel like we have is just shit that we feel like we have, you know?

We don't actually tangibly have anything inside our brains. So I think like for AI, even if it doesn't feel the same way, like it still has all the functional information and bits that we have as well. I think other than the fact that we feel like we have free will and we feel like we have a soul.

'Cause, you know, we can't prove those things. So we all feel like we have it, so we think it's there and we think it's real. So yeah, I don't know. I mean, like, do we even need that part of us? Maybe we do for survival or something. I don't know. Yeah,

Michael: it's, it's super interesting, you know, and I've, I've seen some studies that were, you know, exploring the concept of free will and whether we have free will.

And one of the ones that comes to mind was, they had like brain scanners attached to people, and they had two little joysticks, and, the bottom line of the study, they're supposed to click on one of 'em once they made a decision. Yeah, yeah.

Bill: The FMRI study. Yeah. And they, they had made the decision in their head like seven seconds before or something.

Michael: Yeah, like the machines could basically tell which decision they were gonna make a few seconds before they were conscious of which decision they were actually gonna make. Mm-hmm. Which, the implication was that our perception of free will is an illusion, because, you know, we make these decisions in a lot of cases based on our underlying kind of subconscious drivers.

And then our conscious mind, our prefrontal cortex, comes in afterwards and says, I did this, and rationalizes it: I did this because of X, Y, Z. Then there's another study I saw that was really interesting. I think it was people who, for whatever reason, had to have their two hemispheres disconnected.

I think it was maybe related. They were having seizures and so they had

Bill: You listen to Sam Harris, don't you?

Michael: I've heard a few of his a few of his talks. I, there's another contact that recommended him recently and yeah, I, I read one of his books. I, I know that he's like, you know, big into this space of like free will and

Bill: yeah, he talks, he's talked about both of these things a lot, like the free will thing and then how people have the.

The brain split for the epilepsy thing, and then they start, like, yeah, doing weird shit with their hands and stuff.

Michael: Yeah. That, that part, I, I think that's, that's so interesting. The, yeah, the epilepsy thing. So they have the hemispheres cut into two, and then, if you, like, if you cover your, I don't know, I'm probably gonna butcher this, but the idea is still true, it's just the exact specifics. Mm. But it's like, if you cover one of your eyes that's attached to, like, the other hemisphere and then show someone, like, a picture of a shovel, and then ask them to, you know, pick the first thing that comes to your head, and they're like, dirt. And then if you ask them afterwards, like, why did you pick dirt?

Then they're gonna try to rationalize it and be like, well, you know, I was walking today and I, you know, I saw like dirt. But the truth was, like, they actually saw the shovel in their other hemisphere. And so, like, that's the reason why. But we rationalize, and we sort of come up with, we create, we weave a story about why we decided to, you know, do the thing.

Bill: So yeah, I'm reading about it now. Apparently, like, when one split-brained patient dressed himself, he would pull his pants up with one side and down with the other, 'cause that side did not want to get dressed. And apparently he also reported to have grabbed his wife with his left hand and shaken her violently, at which point his right hand had come to her aid and grabbed the aggressive hand.

Wow. And also apparently, yeah, if you close one of their eyes and show them an image, they can't tell you what the image is they saw, even though they saw it, 'cause they saw it with the other side that isn't connected to, like, I don't know, something.

Michael: Wow, that's so interesting.

Yeah. I think that was part of it, right? It was that one was like the linguistic center and one of 'em was like the visual center. Mm-hmm. But yeah, bottom line, it does seem like there's been enough scientific, you know, studies and discoveries around this idea that, while it certainly feels like there's a self here, and there's, you know, an entity that's making decisions, that's kind of like pulling the joysticks and deciding what am I gonna say, or what's my next thought gonna be?

But, you know, anyone who's listening to this right now, who's, you know, meditated before, has had the same experience, where it's like that thing is just running on its own. Like, the mind just, like, keeps going, and you can't, mm-hmm, like, it just happens. It's kinda like breathing.

Bill: Yeah.

Meditation's an interesting one. Sam Harris, he also talks about this a lot, and he has a saying, or something he said, which I thought was funny: when he starts meditating, he feels like his brain is being hijacked by the most boring person alive.

Michael: That's funny. Well, I mean, maybe we can link that to the AI discussion a bit. So it sounds like, from your viewpoint, you know, consciousness is something that, to a certain degree, the AI already has. It's conscious, in a different way to, you know, what we describe as human consciousness, but it's sort of made up of the same energy or the same atoms and intelligence. And maybe that's related to, like, the fact that it has patterns of recognition, or it can learn. What are your thoughts on, have you heard of Neuralink, you know, Elon Musk's company Neuralink that he's making? Yeah. What are your thoughts on that whole angle, and, you know, maybe we can, like, I mean

Bill: I, yeah, I probably wouldn't get version one, but like, maybe after, you know, a few versions, when, you know, people are starting to use it and it's not buggy anymore and all that stuff, then yeah, maybe I'll give it a shot.

Seems pretty cool. I mean, he makes a good point when he says we're already cyborgs; it's just that our bandwidth is extremely limited. You know, we already have all the information we want, like, at our fingertips on our phone. We're able to press a few buttons on our phone and make a pizza come to our house.

We're able to, you know, do anything from where we're standing. We can just press a few buttons and make a car come to us and pick us up. It's like, if you could just do all that mentally and have unlimited bandwidth basically, or, like, crazy amounts of bandwidth, way faster than going from eyes to phone and back into the brain to making a decision on the phone or whatever, then yeah, I mean, I could see some value in that.

It would just speed everything up and, again, make us operate on a higher level than we currently are. But yeah, I mean, obviously putting something into your brain is pretty scary, so I probably wouldn't get version one. Like, imagine getting version one and then somebody just hijacks your brain 'cause it's got, like, some security flaw or something, and then they just control you.

That's wild. Like I imagine hackers in that space, that'd be crazy.

Michael: Yeah, that's probably about one of the most terrifying experiences you could imagine, is being like literally hijacked by someone else.

Bill: And yeah, like if somebody, you know, hacks your computer and all of a sudden it's got malware on it, it won't restart.

It's blue screening all the time, all this shit. Imagine if that happened to your brain, dude, that'd be horrible.

Michael: Yeah, that would be, and just like, yeah, I'm just imagining. I mean, that's probably like a Black Mirror episode at some point if they haven't made it, like at some point that'll be a Black Mirror episode.

Right.

Bill: Yeah, they just released some new episodes that I haven't watched yet, so maybe one of them is that, I don't know.

Michael: Ah, cool. Yeah, Black Mirror's interesting. I really like the areas that it explores, 'cause I'm, you know, a fan of sci-fi. Yeah. And also I think it's good to, I don't know, be aware of potentially dangerous or negative outcomes, so that we can avoid doing the things that, you know, lead to them. But I also have noticed this effect where everyone thinks that the world's gonna be awful in the future, because all the future technology they've depicted shows all the worst, you know, cases and the worst pieces of our humanity.

So, what's your viewpoint? On the optimism scale, do you think the future is gonna be a lot better because of technology, or do you think that it's going to be worse? You know, what's your perspective on where it's going?

Bill: Steven Pinker has a study on this, and he's like, the world is getting better by every metric.

It's just, we feel like it's getting worse for some reason. And I feel like it's getting worse because all of the things that we used to be able to ignore, 'cause they weren't in our face on the internet 24 hours a day, we're worrying about now. We're worrying about, like, everything, and it's just a lot of load for a human brain to take on.

But by every metric, like poverty, hunger, general happiness, all of this shit, I don't know. You'll have to check out his study. He says it's getting better. So I don't know, I probably just believe him, but also, who knows? Maybe the future will be crazy 'cause of nukes or something.

I, I don't know. Huh.

Michael: Yeah, that's true. I know that's one of the concerns, or one of the big trends or movements around AI: you know, making sure that there's AI safety or alignment so that we can avoid a worst case scenario. But it is interesting, that effect of, why does the world seem worse now than in the past, even though it's getting so much better?

And maybe I saw the same study, but the explanation that I saw was that, like you mentioned, we have more access to the news, and so we see the global stuff. Things that we would never have any idea about if something bad was happening, you know, across the world. And now it's like we can take the worst pieces, the worst news, when people die and horrible things happen.

And that's like what we're experiencing every day because we're just taking the worst pieces.

Bill: And that's the only thing in the news, right? 'Cause it's the stuff that gets hits. You know, they're not talking about how, like, some new building that houses hundreds of homeless people just went up, or anything like that. The headline is always, like, this guy's getting sued by this guy.

This bunch of people died in a plane crash. Like all this kind of shit. Because that's the stuff that makes people go like, whoa, and like wanna read it.

Michael: Yeah. And that could be an interesting use case for, you know, sort of, like, biohacking, or hacking our brains to a certain extent. Because, you know, it seems like part of the reason the news has become what it is, is because what bleeds, leads. And our brains are just, like, designed to hyperfocus on risks or dangers. Yeah. We're like that for good reason.

Bill: We're morbidly curious. We're all morbidly curious. We're like curious as to the worst shit. But also I think like for instance, like I've been getting mosquito bites my whole life, right?

So have you. But if all of a sudden we read a study that says mosquito bites have, like, this much of a percentage chance of causing you this risk or that risk, and, like, you can get this kind of disease from them, all of a sudden you're mega paranoid about mosquitoes. Whereas before, when you didn't know, it's like the "ignorance is bliss" saying, you know?

Mm-hmm.

Michael: Absolutely. Yeah. The way that I've heard it described, maybe from, like, evolutionary psychology, is that the reason our brains focus on, you know, risks or dangers is because in the past, especially, it was really important that if you noticed a cheetah, or you noticed something dangerous, that was a higher priority to focus on.

So you could run away, you could get away from it, you could be safe and you could survive. Versus, you know, if you found, I don't know, a tree with a bunch of apples. Like, that's a good thing to focus on too, but you're probably gonna prioritize the thing that's like, hey, there's a giant cheetah lurking around the corner that might eat you.

Because that's gonna have a more immediate, urgent sort of effect on your ability to survive.

Bill: Yeah. There's definite remnants of that trait left in our brain, for sure. Probably not to the level it was when we were worried about cheetahs, but it's still there. It's gonna take a while to have that completely get out of people's brains. And I mean, there are still dangers, you know. Like cars, mm-hmm, pretty dangerous. Natural disasters are still a danger. All that kind of stuff. Yeah. Yeah.

Michael: It seems like one of the things that has remained true, or that we're becoming even more aware of with the evolution of AI and with technology, is how intelligent our brains are.

And how much bandwidth, how much power there is to do things that we wouldn't even necessarily think of as difficult. Like, a baby can reach out and grab a blueberry, and training a computer to reach out and grab a blueberry is, like, crazy difficult. I think they're approaching a point where they can do that, especially if you program it specifically, like, this machine is designed to grab blueberries. They can do it.

But having that kind of general intelligence that a baby has, to be able to reach out and grab a blueberry, seems like it requires a huge amount of processing and, like, neural power. And yeah, we're approaching, quote unquote, general intelligence, it seems like, with AI.

So once it reaches that point, then maybe we can bring it back to the implications for the music industry, and creating music, and using, you know, a Neuralink, or using AI as a tool to be able to express your creativity. I'm curious to hear your thoughts. What's your perspective, kind of, right now?

I know we're kind of in this hype cycle where AI is, you know, changing the game in a lot of big ways, and there's a lot of hype around it right now as well. I'm curious if you've seen any, like, totally game-changing music production AI applications. And I'd be curious to hear your thoughts on, if someone is listening or watching this right now and they're interested in exploring this new technology, but they're not really sure where to start, how do you recommend they kind of surf the wave as it comes out?

Bill: So, two things that I've found in AI that are very useful already. One is creative stuff, like stimulation for an idea. There's been a lot of times where I've gone into the studio with the idea that I wanna write a song, and I'm like, all right, where do I start? Like, what do I do? What do I have today?

I start, like, messing around with sound design. I start throwing shit at the wall, basically. Like, you know, playing some chords, doing this, doing that, until eventually I get to an idea where I'm like, oh shit, that actually sparks some inspiration, and now I'll start building on that idea.

Well, with AI now, you can just spit out a thousand ideas in, like, 10 minutes, and maybe 999 of them will be shit. But one of 'em will probably spark some sort of idea, and then you instantly can get on the path to making a song out of that idea, and building upon that idea, and whatnot. So that's already a thing.

And the other thing is that I used to spend a lot of days doing sound design, where, again, it was a pretty randomized process. It's just, like, throw a bunch of effects on shit, run samples through all those effects, run synthesizers through all those effects, just do a bunch of stuff and record it all down to audio, then save it all to a folder.

So when I'm actually writing music, I have, like, a whole bank of sounds that I made, that I thought were cool and saved. And now AI can just kind of do that. You can train it on a folder of sounds that are similar, like a bunch of Virtual Riot basses or a bunch of Cyran kicks or something, and then all of a sudden you have a model file that can just generate those.

And I've generated plenty of bass sounds and stuff already with it, and they're better than basses that I can make. They sound awesome.
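The workflow Bill describes, in both cases, is generate a big batch and keep the few keepers. A minimal sketch of that loop, where `generate_sample` and `score` are hypothetical stand-ins for a trained model's sampling call and for your own ears, might look like:

```python
import random

def generate_sample(seed):
    """Hypothetical stand-in for one generative-model sampling run
    (e.g. one Dance Diffusion generation); returns fake 'audio'."""
    random.seed(seed)
    return [random.random() for _ in range(4)]

def score(sample):
    """Stand-in for listening and judging a result; here just the mean."""
    return sum(sample) / len(sample)

# Spit out a large batch, then curate: keep only the top handful.
batch = [generate_sample(seed) for seed in range(1000)]
keepers = sorted(batch, key=score, reverse=True)[:10]
print(len(keepers))  # 10
```

In practice the "score" step is a human listening pass, but the shape is the same: cheap mass generation followed by aggressive curation.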

Michael: Wow. What's the... is it like a plugin that you use to reference and generate the new model?

Bill: No, it's called... GitHub. Sorry, not GitHub. It's called Dance Diffusion, and it's on GitHub. It's some code that you can install locally on your computer, if you know how to install stuff from GitHub, or there's some online Google Colab thing that somebody made to run it through the Google GPUs. But it's better to run it locally. It's a lot faster, but you need to have a pretty powerful GPU and you need to be using Windows to do that.

Michael: Mm-hmm. So cool. I mean, a couple of things that I've seen coming up that I think are just, you know, gonna change the game for sure. And I know there's a lot of stuff to figure out.

One is around the AI voice models, and being able to create, you know, an artist model. Like, here's AI Drake, but it's an AI Drake that I can generate anything I want with, you know, with a couple clicks of a button. It seems like that's already opening up a big can of worms around copyright and ownership.

Michael: But, you know, to your point around being able to eliminate blank page syndrome: just imagine if all of us had our own artist model that was trained on our data and who we are, that you could activate with the click of a button. And it sounds like you've actually played around with doing something like this?

Bill: Yeah, this is already possible. It's totally possible. Would you like me to play you some stuff that I've made with it? Yeah, for sure. All right, one sec. Let me move my cat so I can access my keyboard. Sorry. All right, here, let me... how do I share audio through here? I guess I have to use the MME driver. One second.

I'll get this set up. Yeah, it'd be cool to show people some of this stuff, 'cause it's pretty fucking mind-blowing what it can do already. Wow. And I think people are not aware that it's already there. It's not there to the degree that it can produce perfectly finished-sounding songs, but I mean, it's a great tool already.

So, yeah. Let me just pull it up before I start sharing.

Michael: Yeah, cool. I think there's, like, a share sound button on share screen as well. I don't know if that'll be easier than sharing it through a third party app, but on Zoom, if you go, like, share screen...

Bill: Oh yeah, yeah, yeah, totally. That's what I'm gonna do. I'm just trying to find the folders first so I'm not fumbling around too much. So I want to find this specific folder of basses that I made that are really good, and then I want to find this specific folder of songs.

Michael: Cool. Yeah. One thing that popped in my head as you're describing AI and, like, how it's gonna impact music production.

I don't know if this exists yet, but it seems like it's just a matter of time, and probably a very short amount of time, until one of the DAWs figures this out. But, you know, imagine if you could open up "create new Ableton Live set" and it starts with a prompt, and you just type in, like, you know, what are you looking to create today?

And you could type in references, or you could type in whatever it is you're looking to create, and then click go, and it creates all the instrument tracks for you, and it creates, like, MIDI, you know, along with each of those tracks. So you could just click play, and not only has it generated the audio, but it's generated all the MIDI and all the different instrumentation.

So you could tweak it, you could play around with it, or you could say, you know, I don't like the drum track on this, could you redo the drums? That seems like that's gonna happen sometime in the next few years, right?
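No DAW ships the feature Michael is imagining here; as a thought experiment, the data structure such a prompt-to-project API might hand back, with entirely hypothetical names, could look something like:

```python
from dataclasses import dataclass, field

@dataclass
class MidiClip:
    # (pitch, start_beat, length_beats) triples
    notes: list

@dataclass
class Track:
    instrument: str
    clip: MidiClip

@dataclass
class GeneratedSet:
    prompt: str
    tracks: list = field(default_factory=list)

    def redo(self, instrument, new_clip):
        """Regenerate one part (e.g. 'redo the drums') by swapping
        that track's MIDI while leaving the other tracks alone."""
        for t in self.tracks:
            if t.instrument == instrument:
                t.clip = new_clip

# A generated project: every track editable, with its own MIDI.
live_set = GeneratedSet(
    prompt="halftime bass music, 140 bpm",
    tracks=[
        Track("drums", MidiClip([(36, 0, 1), (38, 2, 1)])),
        Track("bass", MidiClip([(29, 0, 4)])),
    ],
)
live_set.redo("drums", MidiClip([(36, 0, 0.5), (36, 1, 0.5)]))
print([t.instrument for t in live_set.tracks])  # ['drums', 'bass']
```

The key design point in Michael's idea is that the output isn't a flat audio file: it's per-track MIDI and instrumentation, so any single part can be tweaked or regenerated.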

Bill: Yeah, I think so. All right. Here, hold on. I'm gonna share my sound here. And my screen.

All right. You can see my screen here.

Michael: Yep. I think it's just, there we go. Yep.

Bill: And can you hear this audio okay if I play it? Oh, yeah. Yeah. Okay. Cool. Cool. All right, so I've got two folders here. One of them is like full of bases, so I'll just pull in a couple at random. Yeah, I'll just pull in like, I don't know this many bases and we can have a listen to them.

Michael: Don't kill Bill. He's like a pretty cool dude.

Bill: Okay, so these are all AI generated. Basically what I did is I trained a whole model on basses that I've made, and then I trained a whole model on basses that Virtual Riot has made, and then I made a 50/50 blend from both models. And then it outputted... like, if you look at this folder, I have thousands of these. I just left it running for a few hours and generated a bunch. So they sound like this. That one's kind of shitty. That one's okay. Kinda shitty. That's not bad. That's pretty good.

Yeah, some of them are kind of shitty, but some of them are pretty quality basses, I would say. So that's one thing you can do with it: sound design, which I've been using it for a lot. And then the other thing you can do is you can train it on music and have it generate music.

So, here's some examples of that. So this was trained on all of my stuff, and now it can just spit out like a bunch of, sort of ideas, which sound kind of weird, but you'll see what I mean.

Let's go to, like, a louder part, maybe over here.
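The 50/50 blend Bill mentioned, one model trained on his basses and one on Virtual Riot's, can be approximated by linearly interpolating the weights of the two checkpoints. This is a generic sketch of that idea with plain lists standing in for real tensors, not Dance Diffusion's actual checkpoint format:

```python
def blend_checkpoints(ckpt_a, ckpt_b, mix=0.5):
    """Linearly interpolate two checkpoints with matching layer names:
    mix=0.5 gives an even 50/50 blend of the two models' weights."""
    assert ckpt_a.keys() == ckpt_b.keys()
    return {
        name: [mix * a + (1 - mix) * b
               for a, b in zip(ckpt_a[name], ckpt_b[name])]
        for name in ckpt_a
    }

# Toy "checkpoints": one layer, two weights each.
my_basses = {"layer1.weight": [0.25, 0.5]}
vr_basses = {"layer1.weight": [0.75, 0.0]}
blended = blend_checkpoints(my_basses, vr_basses)
print(blended["layer1.weight"])  # [0.5, 0.25]
```

With real PyTorch models the same averaging would run over the tensors in each model's state dict; sampling from the blended weights then yields sounds somewhere between the two training sets.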

Michael: So this is all completely generated with ai. Yeah.

Bill: Every now and then it will like skip into a different idea that it has and it would just be like crazy. But yeah, I mean, it sounds like my beats, you know, like not quite.

So it's all trained on just, I don't know, a hundred of my songs or something. And then it just generates a bunch of stuff like this, which sounds reminiscent of my music, but it's, you know, a little weird, 'cause it's all done by diffusion. So you might've noticed it has all that kind of white-noisy quality to it.

Okay. But yeah, I think it's only gonna improve when they start improving the encoders and stuff.

Michael: Dang, dude. That's wild. And so what platform did you use to train the model and actually generate these audio files?

Bill: That's all Dance Diffusion, that one I was telling you about before. It's on GitHub. It's by Harmonai, and the repo is called, like, sample-generator or something like that, but the tool is called Dance Diffusion. That's what it's called.

Michael: Yeah. Okay. Wow. And so I'm curious, I've heard some AI-generated examples from, like, the Beatles, you know, songs with John Lennon, where it's an AI John Lennon, which is just mind-blowing.

Is that a tool that lets you train it up? And then, like, how does that work? Because then it's not only the sound...

Bill: That's a different thing. That's called so-vits, or SVC. And that's just a different type of model trainer, basically, that's designed more to train just voices and stuff.

Michael: Okay, interesting. 'Cause that's the thing where, you know, I was just talking a little bit about that idea of a DAW where you type in a template track and then it creates seven different instruments, and then it has the MIDI notes for each one that you can play around with.

'Cause I guess it would be a different task to, you know, create the top line, or create, like, a singer singing. And then also, you know, you'd need the intelligence to... I bet if it was just left to its own devices, someone would be singing, but the words wouldn't mean anything, or it wouldn't actually be coherently saying things. But clearly people have been working on this in a way that you can actually train the voice model to say certain things. It is interesting. What's your perspective on ownership, or how should we be able to play around with these tools?

Like, if I could have an AI Drake, you know, in my DAW, who should get royalties? Should he get paid based on that, or should we be able to use that freely, creatively? Where do you see that can of worms ending?

Bill: So, we're in a real gray area right now, obviously, and the way that I'm looking at it is, just as long as I'm not fucking anyone over, it's fine.

You know, like, if I use a little bit of the Drake vocal thing to do something in my song, am I really taking food off Drake's table? Probably not. So I don't know. As long as I'm not fucking somebody over... like, if I'm making a video trained on you and I'm putting some fake shit up on the internet of you saying, you know, stuff about loving Hitler or something, then obviously I'm fucking you over.

So I wouldn't do that, you know? So I think that's the way I look at it. As long as you're not doing some heinous shit to somebody else, it's probably fine. But like I said, we're in a gray area. We don't know. Some people are like, if you train any data on somebody else's shit, then that should be totally illegal.

Other people are like, no, it's fine. I don't know, I just think like, don't hurt anyone, and that seems like a pretty good general ground rule.

Michael: It is. That seems like a good human rule. Like, if you come at life from a standpoint of, you know, trying to provide value, trying to serve people or contribute and, you know, make lives better for everyone, it seems like it's kind of hard for that to backfire. I mean, as long as you're willing to acknowledge if you make mistakes. Like, you know, intent does matter.

Bill: Totally. You also have to be pretty objective about that intent, right? 'Cause something that you might think is totally fine, somebody else might not.

So Yeah. You just have to be like thoughtful and mindful of like what you're doing and whether or not it's hurting somebody.

Michael: Yeah, absolutely. Cool, man. So we have ventured quite a distance in this conversation. We've gone to Neuralink, we've talked about AI and the future, and we've brought it back to today and some of the things that you're building right now in terms of AI tools: being able to leverage the existing sounds and data in the songs that you've created to generate new creative material.

Pretty dang awesome. So Bill, thank you so much for taking the time to hop on here and share some of the lessons and wisdom that you've learned through your experience. And for anyone that's listening or watching this right now and would like to connect more or dig deeper, where do you recommend they go to connect with you?

Bill: Probably my website, mrbillstunes.com. But if you don't want to go to my website and you just wanna connect with me on social media, I'm just Mr. Bill's Tunes everywhere, like Instagram, Facebook, just whatever. Just type Mr. Bill's Tunes and you'll find me.

Michael: Alright, awesome. And like always, we'll put all the links in the show notes for easy access. And how do I know that I'm talking to the real Mr. Bill right now, and this isn't just, like, the AI model of you?

Bill: I don't know. I mean, I, I guess there is no way to know, like I could've just built a model that's just doing this whole thing for me. 'cause I do have like thousands of hours of teaching material and podcasts. So maybe I have just trained a model and whatnot, you know,

hard to say.

Michael: Insert Twilight Zone theme music in the background. Awesome.

Bill: Cool, man. The good thing, though, is if there is a possibility that you're talking to the fake Mr. Bill, then if I put my foot in my mouth at any point in time, I can blame it on the AI.

Michael: There you go. Yeah. That's plausible deniability.

We're all about to have that one pretty soon, probably. There we go. Cool. Yeah.