Episode 232: Ethics and Opportunity in AI Music Creation with Hazel Savage

LISTEN TO THE EPISODE:

Scroll down for resources and transcript:

Hazel Savage is a music-tech trailblazer, guitarist, and former CEO/Co-Founder of Musiio. Beginning her journey with Shazam, she gained deep industry insights at Pandora, Universal, and HMV before founding Musiio, which was later acquired by SoundCloud. As a global speaker and advocate for AI in music, Hazel blends her expertise with relatable, often humorous, anecdotes, offering valuable perspectives on AI’s impact on the music industry.

In this episode, Hazel shares her insights into the fast-evolving world of AI in music, from ethical dilemmas to creative opportunities.

Takeaways: 

  • How AI is transforming music creation by enhancing human creativity and making production more accessible

  • The ethical implications of AI-generated music and how artists can protect their rights in the face of evolving technology

  • Why "strong beliefs loosely held" can be a powerful mindset for adapting to the rapid changes in tech and AI

Michael Walker: Alright, so I'm excited to be here with my new friend, Hazel Savage. Hazel, I gotta start off by just saying, that's one of the most badass names I've heard. Hazel Savage. Like, wow.

Hazel Savage: Yeah. Thank you. Thank you very much. I have had that comment a few times before. People often ask if it's like a stage name, 'cause I used to be in a band when I was younger, but no, it's my real name. And actually, Savage as a surname is quite… it's not uncommon in the UK.

So we always used to joke in my hometown: obviously my parents, both Savages, and two doors down there was a whole other family of Savages, no relation. So, you know, it's not uncommon.

Michael: You know, with Savage as your last name, I'm sure you hear that a lot. There were probably a lot of jokes growing up your whole life. It's like, oh, here comes another Savage joke.

Hazel: Actually, last time I was in the US, someone working for an airline asked if I changed my name to be more like the Rihanna song, and I was like, no, that's my real name. But that would be a good reason, or a good story. So maybe I should have.

Michael: Maybe it's reversed. Maybe the song was named after you.

I'm excited to connect with you today. So just a quick intro for Hazel: she's a music tech veteran and entrepreneur, former CEO and co-founder of Musiio, which sold to SoundCloud in 2022. She has over 15 years of industry experience.

She's been pioneering AI in music. She was an early employee at Shazam and played roles at Pandora, Universal, and HMV. Now she's a global speaker on AI integration and digital transformation. Woo. I love talking about AI and what's happening right now. I'm really looking forward to connecting about this.

And she's also an advocate for women in tech and music, so she offers insights and humor on navigating male-dominated industries, with a focus on AI technologies and real-world applications. So I'm really excited to have her on the podcast today to discuss and explore what's currently happening around music and AI. Certainly AI is affecting everything, but particularly in music and the creative industries, it seems like the wild west.

So it's an important time to be having conversations like this. So Hazel, thank you so much for taking the time to be here today.

Hazel: Thank you so much. Thank you so much for having me. And, you know, it's really exciting, 'cause I originally started my company, and we were doing AI, back in 2018. And I always joke that back in 2018, kind of no one was interested, and then I would say in the last 12 months it's become one of the hottest topics to talk about. So I always wanted to talk about it. I'm just glad that now so does everyone else.

Michael: It's super interesting. Yeah, I've heard that quite a bit, that in a lot of cases AI is kind of having its moment in the sun, but there's been so much happening for many years now related to AI and research, like the work that you were doing. And now, what would you call it? Kind of the ChatGPT moment, where it burst through and now everything's happening.

Hazel: I think it just hit that level of practicality where the compute was cheap enough and the models were good enough that people could try it and be impressed. But, you know, this sort of technology has been around for a long time. It's just that the things it used to be able to do were not super impressive, or, you know, not better than what any human being could do.

So therefore, not really of interest to most people. But yeah, we live in exciting times.

Michael: Absolutely. You know, I wonder, from your perspective on the growth of AI, do you see it as an extension of digital intelligence since the beginning of computation? We've had calculators and different things, and over time I know there's a moving threshold: as we get new capabilities, we're pretty fast to be like, oh yeah, this is normal.

Or like, oh yeah, the internet: we can talk through a portal on a little device across the world and connect in real time. We take so much for granted now. So I'm curious to hear your thoughts on the evolution of AI, as you've had so much experience exploring that.

Hazel: Definitely. And you know, even back in 2018, we made the conscious decision to say that what we were actually doing, and what most AI companies are doing, is what we call machine learning in computing, which is the act of teaching a machine to replicate a behavior to the point where it mimics it so well that it seems artificially intelligent.

But now everyone's just really comfortable with the term AI, which is really much more of a kind of sci-fi, all-encompassing catchphrase. Realistically, we've been training AI models to replicate certain tasks for a long time. I just think that we've had that breakthrough where it's now exceeding human capability.

And I'll use Musiio as an example: a human being can manually tag 200 songs in a day. Like, I could sit and listen to 200 songs, write down the key, write down the BPM. An AI can do like 5 million a day. It's a super tedious task, and the computer does it just as well as a human, and more accurately.

So that's super impressive. So why wouldn't we want to utilize technology in this way? I think there have been other areas where it gets more controversial, where in the past we've used machine learning and AI to replicate, say, a human singing voice, with mixed results. And it's that kind of thing where it's like, oh, it can do it, but it doesn't sound good.

And so when the output isn't impressive, we all kind of decide collectively, as an industry, as a community, as human beings, that we don't really value it. But we're right at that interesting intersection where it's now able to do more things than not.

And, you know, where we are now in 2024 as well, sort of culturally: there've been a lot of layoffs this year, a lot of teams downsized. Everyone's running lean teams, agile teams; businesses are trying to be profit-forward, not growth-forward. And that's led to more automation as well.

How can we do more with what we already have? And I think a couple of these factors together have pushed this technology forward.

Michael: Super interesting. Hazel, give it to me straight. How are things looking for us humans?

Hazel: Pretty good, I think. I remain optimistic. I'm pro-technology, but I'm also pro-human-being, in that there are certain things humans can do better. I remember I spoke at an AI conference called CogX a few years ago, and one of the panelists, as an intro to the panel, showed four pieces of artwork and asked us to look at them, form an impression in our heads, and see what we thought. And they all looked amazing.

Then the kind of wannabe gotcha moment was that three out of the four were AI-generated, and this was before DALL-E, before Midjourney, when it was less accessible. And then the debate got into: well, does it matter if something's AI-created, if you can't tell, or if you don't know?

And I was like, it's fine. I looked at the artwork. I liked some of it. I didn't like some of it. Now I know which ones are AI-generated, and I still like and dislike the same ones. My point would just be: if it's AI-generated, what's the endgame here, that you tricked me, that I couldn't tell? And that's kind of where we're getting to now, which is: how important is it that you can tell what's going on?

Does the background behind the art matter more? These are the more humanistic questions that we struggle with. And I've been listening to a lot of AI-generated music recently, and I feel like, as a human being, my personal opinion is I can tell about 80 percent of the time if a song is AI-generated. There are just a couple of tells in the melody, the vocal, the style, the structure, the lyrics. But I'm not a hundred percent sure all of the time.

And, you know, we will get to that point where we'll be like, okay, well, does it matter? But also, we can't train another AI to identify AI songs if I, as a human being, can't tell, right? That's another one of the challenges.

Michael: It's a little bit like the Turing test for AI music: if it can fool us, then do the means matter? Super interesting. You mentioned some use cases from building and using AI for many years now, and there are some use cases that feel not at all controversial, and some that feel very controversial. And it seems like there are nuances too. We had Nolan Arbaugh on our podcast about a month ago, and he's the first human patient to have a Neuralink installed, which is a neural interface where he's basically the first telepathic human, and he can control a cursor with his thoughts.

And we had him on the podcast to create the first song ever telepathically. So he used AI platforms with his thoughts to create a song. It was really cool.

Hazel: That's wild.

Michael: Yeah, that's one case where I think most people would be like, okay, this is someone who is paralyzed below his neck, and there's no way he'd be able to create music with instruments, and now because of this technology he can extend his creativity. So there are fewer moral issues there. But when it comes to this idea of music that's completely AI-generated, devoid of human contact, I'm curious what you'd recommend.

For artists who are watching this right now: we're in the middle of the wild west, and we're seeing these amazing tools and capabilities that we can potentially use to extend our own creativity. There are a lot of people, though, who have mixed feelings: is this wrong? Is this cheating?

Am I losing something by using these tools? So I'm curious what your mindset is or what you might suggest for artists as it relates to this new technology.

Hazel: It is the most interesting time, and I'm absolutely fascinated by the Neuralink example. And, yeah, I don't think most people would begrudge this individual, if he's paralyzed from below the neck, having this sort of accessible technology. And a lot of great technologies come to us via these applications.

I think it's really interesting because we're seeing it unfold in the courts in real time, and we're seeing companies appearing that are generating AI-based music. And I'll think all the way back: it would probably have been 2019 when I tried a very early generative AI app.

It was on an iPhone, and basically you just pressed anywhere on the screen and music just started playing. Now, as someone who can actually play the guitar, I was like, oh, I can tell I'm not using the same neural pathways that I use when I actually play and write music.

And also, you weren't able to create anything bad. It wouldn't matter if you weren't tapping in time or if you didn't move in an interesting way; the AI would sort of auto-correct you to eventually get something that sounded okay. Whereas the ability to fail, and fail massively, is a huge part of creativity.

The ability to write a song and go, that's absolutely terrible, I won't play that for anyone, I'm just going to discard it immediately, even if I've spent a few hours this afternoon working on it. So I could just tell the same parts of my brain are not being used when I use very simplistic generative AI apps as when I write music.

So I think, from an artistic perspective, we're going to approach it differently. But at the same time, maybe you don't have to own an instrument or have had any training to be able to access it, and there's something that appeals to me about the meritocracy of it all, that it could just be accessible to everyone.

Like, why should it be gatekept by people with the wealth to own physical instruments or to have had the training? So there's something about the idea that the technology makes it more available that appeals to me from an artistic perspective. I think where we're getting the most push and pull in our industry at the minute is large generative AI models that have been trained using other artists' data, artists who are not aware that their data has been used and did not consent to their data being used.

And that's the biggest back and forth we're seeing right now. What constitutes fair use? How will these artists be compensated? What does an opt-out look like for people who want one? In the same way as, hey, don't use my music at your political rally: don't use my music to train your model.

I feel like people should have some choice. Even if they put their artistic work out into the world, it's still associated with them, and they still feel like they should have some kind of rights over their materials. So I think that's where we're seeing the biggest conflict, but I think it will evolve over time, and it will, to some extent, play out in the courts.

Michael: It sounds like what you're saying is that there are different ways you can use this technology. The first kind that you used, tapping on something, you weren't really being creative; it's just like, whoa, there's music being made, but it wasn't connected to those same neural pathways.

It wasn't connected to that creative process. And so what you envision is maybe more and more of a movement towards these tools augmenting our current neural pathways, our current capabilities. I just have this visual of me playing a song, and then all of a sudden a symphony just starts appearing behind me, and the AI can accompany me and extend it.

And then I can say, you know what? Actually, I want to add an oboe, and then it just adds an oboe, almost like a producer. And that feels like, wow, that would, like you're mentioning, extend the creativity without necessarily just replacing it.

Hazel: A lot of these augmentative tools are less controversial and already quite popular. And I'll caveat that with the fact that I've invested in a couple of companies in this space. AudioShake, which does source separation, is a company I've invested in: the ability to drop in a track and then get the stems, the guitar, the bass, the drums, the vocal, whatever.

Because once you have that, you can do more with the material, with the right permissions, of course. And another company I invested in, MasterChannel, does AI-based mastering. So again, for people who don't have the several thousand dollars to pay a professional mastering engineer to finish their work in the studio,

the AI solution is fantastic. Again, it opens up that creativity. So I'm very focused on the technologies that create more opportunity. And I think you're right. I was at a hackathon once, and one of the student teams wanted to build something, like a phone app, where when you start rapping, it also generates the beat for you to rap over. And I was just thinking, oh, I love that idea, because think of all the bedroom musicians who, again, don't have that production ability, but they're creative, they've got their vocals, and that's always with them. So little tools like this, I think, have the ability to enhance.

I think where we get into the slight sort of craziness is, you know, one of my colleagues at SoundCloud just typed out some lyrics and dropped them into one of these, and here you go, a full song. And I listened to it and I was like, it's actually quite a good song. And I think I can even hear the source material, which is undeclared, because of the type of music it is.

But there's no through line there between the guy that I work with, who's not a musician, and suddenly a fully created song. And I think, as human beings, we're still trying to distill how far apart those can be before we're not really recognizing the creativity.

And also, as human beings, how comfortable are we with that distance, right? And maybe I have my personal preference. I'm certainly probably more pro-AI than a lot of the people I meet in the music industry. Most people are very intimidated, very anti-technology and anti-AI. But then, I worked in the music industry when most people were anti-music-streaming, which I feel would be a very bizarre opinion to hold now.

You know, if somebody came up to me now and was like, I don't agree with any of this music streaming, Spotify, YouTube, I'd be like, you're in a massive minority. But 10 years ago, that wasn't even a weird statement. So we evolve, and I think we'll figure out where those boundaries are.

Michael: It's a great point. Yeah. And the ability to write the lyrics for a song and then have it created in front of your eyes is pretty amazing for folks who aren't even necessarily musicians. And there's one of those main controversial points that I'd be interested in hearing your opinion and perspective on.

And, you know, it goes without saying that this is all still the wild west; we're figuring this out together as an industry and as humans. But there's this idea of the training data: what's okay, what's ethical for this AI to be trained on?

And this doesn't just pertain to music. It pertains to all kinds of data, video data, and OpenAI, and some of the videos I've seen where they're like, we're not going to say what it was trained on, because it's going to raise some of those controversial discussions. But at the same time, we need to have these discussions and figure it out.

On one hand, I can understand the case that all of us as humans are, in some ways, trained on each other's music and data. We all have influences, and it's not quite as black and white when we're talking about training. There's the Ed Sheeran case about chord progressions, where it can be challenging to find that line between inspiration and copyright infringement. So yeah, I'd be curious to hear your general feelings and thoughts about that topic.

Hazel: I have some thoughts on this, and I totally get it: anything that the human brain learns and can recall at any time, we obviously don't have to pay for that recall. So, for example, the song Bohemian Rhapsody by Queen, right? I know it. I've heard it many times, maybe on the radio, maybe on CD.

I can sing it to myself in my own head right now. I'm not going to sing it out loud because then we'd have to pay for the reproduction of that. But if I just play it in my head for my own enjoyment, nobody's going to charge me for that reproduction. Maybe because it's in my head and they can't prove it.

Maybe if they could, they would. But maybe the Neuralink will know how many songs I've sung in my own head.

Michael: Right, this song was inspired by the part of your brain that listened to Bohemian Rhapsody. And then it's like, what's the original cause? Well, that song was inspired by their influences, and those influences were inspired by their influences.

Hazel: I was thinking more that they would start to charge me a streaming rate for every song that I get stuck in my head, you know, where they count how many times something got played. If they could count how many times it plays in our brains... currently they have no way to know that, so they can't charge for it, but maybe they would if they could.

Michael: I'll talk to Nolan for you. We'll see if we can, if he can hook you up with a Neuralink.

Hazel: Getting charged for those thoughts... this is very dystopian, I have to say.

Michael: It does kind of venture into that territory, for sure.

Hazel: We're going in that direction. But I do think the idea that I can sing any song in my head at any time, technically using my own organic neural network to reproduce it, is the kind of action human brains perform, and a computer is doing something not dissimilar when it's replicating.

And I'm sure this will be part of the argument over what constitutes fair use or a copy. I think where it's more challenging, like I say with my friend who just typed the lyrics and all of a sudden generated a song, is how people feel, whether it's images that have been used or text. I mean, people have said things to me before like: whatever you type in your messenger app, that data is being used to train language models for AI.

And I'm like, yeah, it probably is. I bet if you read the terms of service, it says they can. And I kind of don't care, 'cause I talk about almost nothing of any interest whatsoever in any of my messages. It's just, getting a cup of tea, meeting in the pub, nothing of any interest.

And then, I don't know if you remember this as well, I'm definitely showing my age here, but about 10 years ago there was a thing on Facebook they were calling the 10-year challenge. What you did was post a current photo, and then a photo of yourself that was exactly 10 years old.

And the smart people in the room were like, we're essentially building an amazing data set for how human faces age over a 10-year period. And of course, now there are lots of apps that can make you look older, make you look younger, switch your gender. They can do all of this stuff.

So yes, as free social media users, we've probably all contributed to a data set like that. And maybe we care, maybe we don't. I don't think anyone's making much money off my one individual selfie. But when it comes to, say, Queen and Bohemian Rhapsody, their catalogue, again, that's that artist's livelihood. That's how they've made all of their money.

And does somebody else have the right to use that without even letting them know, sharing any of that wealth, or getting their permission? Those are the questions that we're currently answering.

Michael: Yeah. Those are interesting questions to answer, because you can kind of see both sides for sure. On one hand, with these machines it's never been so easy to detect, or sort of to trace: it's very clear you're taking this data, inputting it into a machine, and then it's creating new original output based on the inspiration.

And even if that's also how our human brains operate, we're not necessarily aware of it in the same way. Like, if we grew up together, if we lived in the same house for 40 years and all we listened to was Bohemian Rhapsody by Queen, and then we learned how to play an instrument and that was the only song we'd ever heard,

I bet the songs we created would sound kind of like Bohemian Rhapsody, you know? And there's a line where it's like, well, when you hear it, if it's too close, then that's copyright infringement. But even copyright can feel like murky territory, because of inspiration and cause and effect. And now we can venture into the world of free will.

And where does original cause come from? And that's what we're exploring.

Hazel: I've kind of posed more questions than I've answered. But just from a personal opinion point of view, and, you know, I always say I reserve the right to change my opinion in the future as I get more information or more data, I probably currently come down on the side of: there needs to be a way to compensate the original owners of the music, either the owners of the recording or the owners of the composition. Rights are complicated, I get that. It's not just one person that owns Bohemian Rhapsody, right? It's a multitude of publishers and labels and individuals. But I still think that if there is a direct line from using material to train something to generating an output, then we need to compensate the original owners of the material. That's where I stand on it currently.

You know, even when I was building Musiio and we were training our first AI models to, say, detect key, detect BPM, we didn't use material that we did not have the right to use. We did a deal with three different labels and companies to access our first million tracks.

When we had that first million, we were able to build our first models and prove what we did worked. We were then able to work with more people, get more data, et cetera. But I remember when I was first pitching to investors, and they were like, oh, did you just scrape everything off Spotify?

And I was like, no, hey, that's not legal. So that's not what we did. But I think the problem, especially from an investment perspective, is that if you're not in music, it's not always totally clear what you can and can't do. Even if you work in music, it's not always clear what you can and can't do.

But I was very conscious, because of my background in music and having worked for Shazam. The problem is it's technically not that difficult to get all of the world's music. It's all online. It's all on every streaming service. It's not technically difficult to get it, but it's, to my understanding, not legal to just go and help yourself to everything.

So that's where some of the disconnect has been. We didn't want to build a company that would be open to legal challenges and have issues, so we made sure we were building with legal data from day one. And my personal opinion still comes down on the side of, not just that you need to pay for what you use,

you probably do, but that the imperative is: if you're going to use something to create financial wealth or benefit, for tech, for yourself, for a new company, you need to recognize the contributions of something core to that business. Almost in the same way that if you go to the gym and there's a dance class or a spinning class that uses music, they have to pay to use that music.

It's an integral part of the experience. So I fall on that side. We haven't figured it out, but at its core, I personally believe that if you're going to generate something new, the money needs to flow back to the works that assisted it. That's my personal opinion.

Michael: Yeah, I think that makes a ton of sense. And I also appreciate how you framed it at the beginning: this is what I believe, this is my personal opinion right now, and I reserve the right to update my opinions as new data comes in, as I learn.

I think it's just a really powerful mindset in general, regardless of what you're discussing. I think it shows an open-mindedness to learn and grow.

Hazel: Because I think otherwise we end up in this position where no one will give an opinion, because they never want to be quoted on it, or they don't want it to…

Michael: Backfire later.

Hazel: Yeah, for it to sound incorrect later. So I just caveat it, because of course I have an opinion.

Most people do, whether they'll express it or not. But then on the other side, as I say, there's new data coming out all the time. There's new technology, there are new ways of using the products. So the way I feel right now is based on the information I have right now. And I try to apply this approach to most things.

In fact, when I started Musiio, we were in an incubator called Entrepreneur First. They had this kind of catchphrase, "strong beliefs, loosely held": the idea that you have to be really firm, really adamant and positive in your beliefs until proven otherwise. And some of the biggest mistakes I see in the startup world and the company world are when people are so hooked on their idea that, even when you start to question them and you can tell it's not standing up or it doesn't make sense, they'll jump through many mental hoops to justify it to themselves, because they're so convinced, and they won't let go of the idea. So that can be a challenge in terms of building a company, but you have to be really committed to what you're doing, until the data tells you otherwise.

Michael: Super smart. Yeah. It reminds me of one of my favorite phrases: the map is not the territory. Which is basically exactly this concept: we all have our beliefs and our ideas, and they're sort of like a map that gives us context to understand the world and the universe as it is.

And that's gonna be really helpful for navigating and getting from point A to point B. But where we can fall into trouble sometimes is when we confuse our maps for the actual thing, when we actually think our map is the territory. And to the point that you just made, when we're so attached to our map, when we're identified with it, we think this is the way.

And then someone else says, hey, you know what? My map actually says it looks like this. Let's figure this out, if you're really attached to this map. And maybe the territory changed over time as well. Maybe the map was right, and then things shifted and evolved. That's where that mindset, reserving the right to learn and grow and evolve, and being willing to, what was that way you put it? Strong beliefs, loosely held. A really valuable principle.

Hazel: Yeah, and it works both ways as well. I've met too many founders who won't let go of the map. But at the same time, if you meet someone who doesn't appear to believe in their own convictions about their idea, that's an instant no as well, because if they don't believe in it enough to persuade you that it's a good idea, then... So that strong belief matters: someone has to be really passionate, they have to believe in what they're doing, because if they don't, why would anyone else?

Michael: It's a great point. And I hadn't really thought about this analogy in this way, and this is where the analogy kind of reaches its edge a little bit, but in some ways it really is possible to have a reality distortion field. You know, like when you hear about Steve Jobs and some of the founders who just changed the world.

It's the square pegs and round holes kind of thing, where literally they had a map and a vision of what could be, and it reshaped the territory around their map, because they had such a strong belief and vision. It is interesting how these come together. But ultimately it does seem like having the awareness to recognize that, okay, this is one perspective.

This is one thing that I believe in, but I have the willingness to update my map and say, oh, it turns out things are a little bit different than what I thought. And I learn from listening to others' maps. If someone has a different map, it's not a personal attack or something I need to be worried about; it's an opportunity to understand, to learn, and to update the map.

Hazel: And bringing it back to that exact example of music and generative AI music: I was really fortunate to be on a panel about this time last year in Australia with Simon Franglen, who's a music composer. He did the Avatar: The Way of Water soundtrack, and, oh, he was the producer on Celine Dion's My Heart Will Go On.

Like, basically a legend, an absolute legend. And since then, he and I have spoken on a few panels together, because we approach AI generative music from different stances. And this is someone who's made their entire living composing their own music. I think when you are open to different perspectives, you can still have a respectful discussion about where the limitations are, where the benefits are, and what the challenges are. And one of his salient points was: yeah, sure, he's fine.

He's the composer for Avatar; when the next one comes out, he'll be the composer for Avatar. He's pretty set. But he got his start writing for advertising, writing jingles, writing background music. And if that work is going to disappear, then what path exists for the him that was born 20 years ago, right?

It's like, if we remove the lower layer of work, there's a part of us that can say, well, that's not super important, because it's just background music and it's not that exciting. But if that's the pathway to becoming a composer, what does that do to our ecosystem and our industry, if that work no longer exists, or there's no economy within music to support that middle ground? So that's why it's important to have these discussions, and to hear from the perspectives of people like myself who are in tech, but also from others who aren't.

Michael: Absolutely. You know, I can't believe... I didn't even know I had this memory stored in my recesses, but there's a show that I probably watched four or five years ago called Designated Survivor.

Hazel: I remember it. 

Michael: You remember that show? It was good. It had Kiefer Sutherland in it. Yeah. The main plot line of the show is that all the political figures got wiped out.

And so he was the unlikely guy that became president.

Hazel: Obviously I'm British, so we don't really have a president, but I was thinking more along the lines of the premise of the show: every time all of the leadership in America meets, one person is not in the room, and they are the designated survivor.

And obviously the premise of the show is that this becomes relevant: he's the one surviving member. But is that even true, or did they make that up for the show?

Michael: It's a good question. Yeah, I'm not entirely sure. It seems like it might be a thing. But the reason I brought it up was that there was one quote from the show that made a great point, because it kind of flipped how I view things. I think I tend to lean towards tech optimism as well, probably to a fault in some cases, where I don't always examine some of the risks on the other side, which are so important and valid.

The way I generally approach technology is that there have been so many revolutions. Lots of previous roles and jobs used to need people to operate machinery at factories, or, before we had machinery at factories, people were picking cotton. Then these tools came and replaced all those jobs and roles, but it opened up creativity: we could spend more time and energy being creative with our thoughts, and over time the trend has been more and more thought work, more and more creativity.

So I predict we'll see the same thing with these tools as well: an opening of creativity and more ability to use these tools to be more creative. But the reason this came up is because of a point the show made that countered that, or at least gave me some perspective on it.

You know, the president, Kiefer Sutherland's character in this case, was opposing a military technology guy who wanted to push through the newest technology. And this guy was pretty evil on the show; he was not a good guy. And he said something to the degree of: this is just part of the wave. This is part of the tsunami. You can't avoid it; it's going to happen. The flood is coming.

And Kiefer Sutherland's character said something to the degree of: you know what? I think you're right. The flood is coming, but that doesn't mean we can't build boats to help us float as it happens.

And so I think that's a great point as it relates to these tools and these resources. Personally, I believe these are going to create huge disruption and change in the music industry. I think a lot of roles that previously existed aren't going to exist anymore, because we have these tools.

Hazel: You made a point before, and I've had this thought before too: imagine that every bar or inn in Victorian or medieval times or whatever would probably have had a live piano player. When the first instance of recorded music, the wax cylinder, became available, the piano players of the bars were probably rightfully annoyed.

They probably lost their regular gig: okay, now we have a wax cylinder. We can just play the piano music in the bar all the time. We don't need a human to come in, and we don't have to pay you every night; we bought that once and now we're good to go. So, right the way back to that point, the music industry has changed.

Okay, so now maybe we don't need so many piano players in the bar, but we definitely need more people manufacturing wax cylinders and recording music for wax cylinders. I think we've seen that change, and we continue to see it.

Michael: Yeah, going from live music to recorded music. What an amazing transformation there.

Hazel: From the number of live piano players to the number of people working in the manufacture, the distribution, and the creation of that recorded music, that wax cylinder. And another angle I always come at it from, because I do come down on the tech-positive side: even just coming up through building my own startup,

I was part of a big community of other tech founders who were not necessarily in the music space. And I remember having a drink once with a really great AI founder whose company did artificial intelligence detection of cancer in medical images. And it was such a fascinating discussion, because if you're having an AI look at a hundred images,

and it's going to tell you which of them have cancer, you want it to be right one hundred percent of the time, positive or negative, for every single image. The impact of getting it wrong is genuine life or death in a lot of scenarios. Whereas, like I said, if I have to put a hundred songs together on a playlist and ninety-nine of them are amazing and one's a bit like, yeah, that's not a great song, the stakes aren't quite so high. And, you know, I did a great project with Rolls-Royce back in the day, and they were explaining to me that Rolls-Royce, who do the jet engines, use AI to get real-time data from every engine of theirs, tens of thousands that are in the air or on the ground at any time, feeding back real-time data on maintenance, maintenance schedules, and when parts need replacing. Now, I want to know that the best tech in the world is telling an airplane I'm on when it needs a new engine or when something's not working.

So, when it comes to AI, I don't think the stakes are the same in every instance, and the acceptable level of error is different depending on the industry. Like, if I give you a playlist with a hundred songs, how many can be a bad fit before you stop listening or before there's a direct impact on you?

But when it comes to some of these other industries, especially the medical world or aeronautical science, the impact can be significant. So I think about all of these things, and I take that into consideration when I talk to people who are potentially anti-technology. Because, yeah, in an ideal world, if it was looking at a medical image of mine, I know that AI is generally more accurate,

but I also want a human being to double-check the work, right? So I think it's important to think about these things, because it'll only become more and more prevalent.

Michael: Absolutely. Yeah, I've thought about this a lot as it relates to self-driving cars. It seems like the threshold for self-driving cars to become ubiquitous is higher than it logically should be from a standpoint of life and death. There's probably a fair case that you'd already save a lot of lives just by going self-driving.

Hazel: This was nearly going to be one of my other examples. And this almost wanders into the territory of a thought experiment, because the number of accidents people are willing to accept from self-driving cars is almost zero, if not zero. However, the number of deaths we see on the road, whether we're talking about the UK or the US, is significantly higher than zero. But for some reason, as human beings... say that, ballpark, a hundred people die a week on the roads in the UK.

I actually have no idea what the number is off the top of my head, but say it's a hundred a week. Say that if we switched entirely to self-driving cars, we could reduce that number from a hundred to fifty. We're still not okay with that: a 50 percent reduction in deaths on the road every week in the UK.

For some reason, as human beings, we're still not okay with that. There's something about us where if it's a human causing it, we accept it, but if it's technology, we don't. And so, unless that number of deaths on the road per week is zero, I don't think we'll see a move to self-driving cars. Because for whatever reason, zero is the number that humans are willing to accept from fully self-driving cars.

Which is kind of crazy.

Michael: It's really interesting. Yeah. There's like some interesting…

Hazel: Yeah. I don't know. Maybe someone listening to this, a psychologist, can get in touch and tell us the research or the theory behind it. But it's true, right? Self-driving cars are safer. They could probably reduce accidents on the road, but until they can completely eliminate them, for whatever reason, socially, as humans, we're not going to accept them.

Yeah. So someone else is going to have to come in and tell us why.

Michael: Yeah. So on the next podcast, a psychologist can come on and we'll just focus on this one thing. It does seem like there's something about the control factor, where it's like, well, as long as I'm the one controlling the vehicle... What we don't like is the idea that it could be completely out of our control, that through no fault of our own we get in a crash. Whereas maybe you can make a case that if someone gets in a crash on their own, they kind of did it to themselves.

Hazel: Is it an element of blame as well? The concept of blame? If a human is driving a car and strikes and kills a person, we blame that person, and we have recourse, the justice system. But if a fully automated car is driving and hits someone, is it the fault of whoever…

Michael: Musk. Let's take… let's take him down.

Hazel: Is it Elon himself? Is it the engineers underneath him? Is it whoever signed off on that model going into production? Is it a combination of the engineers that worked on the software? Is it the mechanics that built the car? Is it the people sat in the car who made the choice to take that journey?

If there's no one to blame, I think there's something about that that makes us uncomfortable as humans.

Michael: Yeah, I think you're right. And gosh, Hazel, I feel like I could talk to you for five hours and just wax poetic about philosophy and all this stuff. I feel myself being pulled into discussing free will and the topic of self. We've just had a great conversation and we're about to wrap up, but at some point I would love to connect more and go deeper on some of the things we've introduced here.

And then we can both look silly, you know, 10 years from now when everyone's moved on from AI to the next big thing.

Hazel: In a hundred years' time, they'll be like, look at this woman over here. What was she talking about? But yeah, we have the data we have now, and I'm generally fascinated by all the elements and facets of it. I've absolutely loved talking today, Michael. What fun!

Michael: Absolutely. Well, Hazel, again, thank you so much for taking the time to be here today and to share some of your lessons and wisdom. When there's someone like you who has accomplished so much and created amazing transformation in this industry and in different industries, I always think there's a lot to learn just from your mindset and the way you view life and approach things.

And so I hope that people listening found the foundational principles we talked through today valuable for their own thinking. And for anyone listening or watching right now who's interested in connecting more, going deeper, and learning a little more about what you're currently focused on: where can they go to dive deeper?

Hazel: Oh, definitely. Well, there's always Musiio.com, M-U-S-I-I-O dot com, which still has the AI demos from the company I built, various products, and case studies, so you can read all about what my company did and what we built. And on a more personal level, I'm super active on LinkedIn. I'm always posting things that I find interesting on there.

So if you just look me up, Hazel Savage, on LinkedIn. I don't think there are too many of us.

Michael: Awesome. Well, like always, we'll put all the links in the show notes for easy access. And Hazel, thank you again for taking the time to be here today.

Hazel: Thanks, Michael.