Episode 258: Mike Butera: Why AI Might Redefine What It Means to Be Creative
Mike Butera, the visionary founder of Artiphon, has redefined how we create and experience music. A musician and technologist, Mike combines his expertise in sound design and human-computer interaction to craft adaptive musical instruments that empower creativity for everyone. With a passion for blending art and technology, he is at the forefront of the music-tech revolution, inspiring artists and innovators alike.
In this episode, Michael and Mike delve into the intersection of music, technology, and AI, exploring the future of creativity and the challenges musicians face in an ever-evolving world.
Key Takeaways:
Discover how adaptive musical instruments are democratizing music creation.
Explore the evolving role of AI in transforming music, from immersive experiences to human collaboration.
Uncover the future of creativity, where scarcity, consciousness, and interactivity redefine artistic expression.
Free Resources:
Tune into the live podcast & join the ModernMusician community
Apply for a free Artist Breakthrough Session with our team
Learn more about Mike Butera and his work at Artiphon:
Transcript:
Michael Walker: YEAAAH! All right. I'm excited to be here with my new friend, Mike Butera. He founded a company called Artiphon, which is an award-winning smart instruments platform that has partnered with Warner Music and Snap for cutting-edge AR music experiences. He has a PhD in sound studies, is a former professor, and is a thought leader exploring the intersection of technology, culture, and human expression.
We met for a conversation a few weeks ago and immediately dove way below the surface. We got into a really interesting conversation that usually takes me five to ten conversations to reach. So, I can imagine the conversation we're going to have here today will be philosophical and zoom out to discuss the intersection of technology and humanity, as well as the future of AI and music. I'm really excited to have him on the podcast today.
Currently, Mike is an advisor to emerging tech startups. He's a musician, a creative, and today we're going to talk about the future of music—creativity, technology, and community, and how they all come together. Mike, thank you so much for taking the time to be here.
Mike Butera: Yeah, this is super fun. It’s good to chat again.
Michael: Absolutely. To kick things off, for anyone meeting you for the first time, could you share a brief introduction to your story and how you founded Artiphon and found yourself where you are today?
Mike: I’ve been a lifelong musician. I picked up the violin at eight, went to college for music performance in Nashville, and realized pretty quickly that I was interested in much more than just playing notes.
I got into studios and discovered how much fun it was producing music. I learned more about the music business and then discovered philosophy, sociology, and cultural theory at a meta level within the industry. That drew me in, and I went to grad school in Virginia, where I got a PhD in sound studies. Sound studies is essentially about how technology affects how we perceive sound and how we shape the world to sound the way we want. It’s a theoretical and fun approach.
At that point, I just wanted to be a professor. That was the goal—teaching big theories about music while playing on the side. I was a professor for about six years when a friend suggested we start our own product design firm. The idea blew me away. I always loved products and design, but I hadn’t considered that path before.
We started a design firm in Nashville called Salt. We helped companies design products across a broad spectrum—Bluetooth speakers, collaborations with musicians, even little things like tools to prevent EarPod cables from knotting. It gave me the confidence and awareness to understand how things are designed, manufactured, and sold.
In 2011, I came up with the idea for Artiphon. The principle was to design musical instruments and interactive experiences that adapt to people. Instruments that could do more than just one thing and cater to diverse musical needs. Over a dozen years ago, this was a pretty unusual idea. GarageBand was gaining traction, the iPad had just launched, and the mobile music revolution was in its infancy. I felt like it was going to change everything.
So, I started Artiphon. Over the years, we launched multiple instruments, mostly through Kickstarter, building grassroots support for this idea. It was a great experience.
Last year, I decided to venture out on my own. Now I’m advising other companies and working on some fun projects.
Michael: That’s cool, man. It’s such an interesting intersection you’re in—philosophy, sociology, technology, and music. I can imagine putting your heart and soul into a company for 12 years must make this an interesting year for you. It seems like a time of freedom to explore what’s next.
A lot of the topics we’ll discuss will stem from that perspective—looking toward the future and what’s to come.
Mike: Yeah, it’s a chance to reset. Whether it’s a business or music, you get into a groove and build routines. But if you want to think or play in new ways, you have to unwind that muscle memory. So, yeah, it’s been a fun year for that.
Michael: Cool. It sounds like a journey of letting go and discovering what happens next.
Speaking of evolution and facing the future, I’d love to hear your perspective. We’re at a unique point in history. It’s crazy to think that ChatGPT dropped only about two years ago. Is that right?
It was right around this time, two years ago. And just, oh my gosh, the rate of technological innovation has exploded and doesn’t seem to be slowing down. It’s like we’re on this exponential curve.
I’m curious what you think about where we are right now. A lot of the people listening to this are independent musicians who are trying to figure out how to build a business and a career out of their passion, their artwork, and their music.
What do you think are some of the biggest challenges we’re experiencing right now as a landscape? How do you imagine we can approach these challenges? And then, I’d love to explore where things are going and how we can best overcome them.
Mike: That's a good place to start. One thing to keep in perspective is that, at least at a theoretical or futurist level, what we're going through right now has been talked about for decades. It's been over 60 years since people started really talking about AI and its transformative effects. Then you have more recent thinkers like Nick Bostrom, Ray Kurzweil, and others who have been pretty accurately predicting how this rate of change was going to go.
So, for anyone out there who's kind of blown away by it, there are experts who have been studying this and are in a really good position to give us context for what's possible. There are utopians and dystopians and all that. But my general view on this is, like we were talking about, unlearning in a way.
A lot of us were brought up in more linear, analog—not necessarily literally analog—but in terms of approaching music, for instance, you had to perform it. You had to learn an instrument. You had to choose an instrument. You had to afford to play the instrument. Then you had to learn it and get good enough—or at least confident enough—that you had something to say. Even then, you were probably psychologically blocked from sharing it.
All of that's changing so quickly that, for people who were brought up in that world, this all seems like cheating and shortcuts. Even music production tools built into Logic and Ableton seem like it's not "real" music.
My experience with Artiphon was, especially in the early days, we were making these smart digital instruments that could play any sound. It could be a whole orchestra or all kinds of stuff. And I was doing it in Nashville, which is one of the most conservative musical towns anywhere. If it's not a Tele through a Tweed, it's not a "real" guitar.
But the opportunity to bring so many new people into that, or to change your process and relearn what you can do musically, for me outweighed the puritan, traditional mode—which I love personally. I still play acoustic and analog instruments. But it is a major challenge to even just be open to these new forms.
So, I guess the arc there is knowing that there are rubrics or theories, which we can talk about, for understanding why these changes are happening so quickly. Probably the biggest challenge, I think, for people is that muscle memory or the psychological approach to novelty. Because, the fact is, unlike challenges before—like how do you lower noise, how do you get good room tone, how do you afford a great-sounding piano—those challenges are evaporating.
All the stuff we used to talk about that was hard isn't hard anymore. So now, the hard thing is probably in our heads.
Michael: That's so interesting. Yeah, it sounds like part of one of the biggest challenges you're speaking to is around our sense of identity. If we get really good at doing something, and we've worked for decades to master this skill, and then there's this technology that makes it easy for anyone to do it, there's a part of our identity that feels lost. We resist that because if we feel like that's who we are...
What really came to mind is some classic analogies. Right now, they're being used to describe AI as a tool versus replacing people. Back in the day, before we had calculators, people could get really good at doing math. They might have math competitions. Now, it's sort of like the most intelligent person doing mental math just wouldn't be able to compete with someone holding a calculator.
Mike: Or chess, or Go. I mean, every day there's something new, right?
Michael: Yeah, I know. I've heard about that one a lot too recently. Go—I never played it—but it sounds like that was a landmark historical achievement when AlphaGo beat the world's best Go player.
I'm curious how you view AI as a tool, and this idea or fear of being replaced by AI. I've heard some interesting conversations around the concept that AI—like every technology—creates a revolution. It displaces previous roles, but then we learn how to use the technology as a tool. Ultimately, it's not something that replaces us; it just changes the roles, and we use it as a tool. Do you think the same thing is happening with AI, or do you think there's something fundamentally different about this form of intelligence that might be different from the tools of the past?
Mike: Yeah, I think that's one of the fundamental questions here. Most technologies throughout history were supplanting or automating a human process into something mechanized or somewhat autonomous that could live on its own, even if you had to press the button to make it work. It had an internal process that was mimicking something like what humans used to do.
The idea with AI—and I think we're seeing that's part of the rate of change right now—is that it is improving itself already. The way these AI systems are working, they are sort of recursively creating feedback loops. So, as AI gets better at coding, it's going to get better at coding AIs.
Michael: Also, the human teams that are using AI to improve the AI, it's that same recursive...
Mike: We're feeding it as fast as we can. And so the idea of scarcity—even capitalism itself relies on scarcity, the distribution of resources, labor—all kinds of stuff. A lot of that scarcity is theoretically going away.
There are all kinds of other systems in the world, whether social, cultural, or political, that mean that's not going to be instant. But there's also something called the "AI effect." You could probably apply this to many earlier technologies as well. Let's take GPT. Before two years ago, we would have said, "Ah, there's not really a way to talk with AI," referencing the classic Turing test. Then as soon as we encounter it psychologically, we say, "Yeah, but that wasn't the real challenge."
Even chess, Go, and all this stuff—before the game, it was hyped up. Then the computers won, and we're like, "Yeah, but we'll move on to the next thing."
Michael: "It can't do this yet."
Mike: Yeah, yeah. And so where we're getting to now—when I talk to a lot of people about new AI experiences or advanced voice modes, for instance, on OpenAI's platform—they're highly conversational. We could almost include it in this chat, and it would be quite natural.
But where I'm getting a lot of questions from people now is: "Does AI have emotion? Does it have intention? Can it make art? Does it have originality, incentive, or inventiveness?" All these ideas that we want to keep human.
The question for me—and sort of from a philosophy standpoint—is: what does it mean to be, say, a musician or a creative person? If that's about you expressing yourself—and this gets straight to your question—if the only thing you need to do to be an artist or a musician is to express yourself, AI can never take that away. That could, by definition, only come from you.
So, I don't think there's any concern—if we're defining art in that way—that AI could take our place. Now, that's a neat little box of a definition. If we're talking about the scarcity of, say, the next hit single, or scarcity of ambient music to fill your living room, or scarcity of good mixing and mastering engineers, and things like that, you know, a lot of those things we're already seeing AI do quite well.
It can take over in the economy, in the marketplace, where humans used to dominate. So that's not exactly an answer to your question—it's more of a market question—but it's related if you're trying to make money with music, for instance.
Michael: Yeah, a hundred percent. I love this conversation already so much. It's so interesting. It sounds like part of what you're saying is that there's this reduction in scarcity across the board, but the thing we're truly afraid of at our core is sort of the loss of self.
And so, because the self—like, that's not something that could be lost—therefore, you don't necessarily have to be afraid of that as artists. As long as we're focused on self-expression...
There's a whole rabbit hole I think we could go down just in terms of: what is self? How do we define self? That goes into the roles that we're playing. Or is there something more fundamental? Is it our soul? Or what is the essence of the self that we're expressing?
But also, as you're sharing that, one interesting question came up, which is around scarcity and the reduction of scarcity. I'm curious, in this world that's approaching rapidly where human labor becomes less necessary and we have this reduction of scarcity, most of our capitalism and the market rely on supply and demand.
So, if there's no scarcity of all these different things, and productivity and abundance become effectively limitless, do you think there is any legitimate scarcity in that post-AI world? And if so, what is the driver of that scarcity? Because that seems like it will drive the markets and our evolution in general. Where do you think scarcity comes into play in a post-AI world?
Mike: I love that question because, at this point, we're talking about predictions—we don't know yet. We're not familiar with it yet. The stock markets still behave in a scarce-world mindset, and people still go to school, apply for jobs, and work within a world where things are hard to do. And that includes making good music. I mean, you can press a button to make music with AI, but as of yet, we're not seeing that take over the charts. Now, I think we're a day away from that—it could happen anytime—but for the moment, things feel somewhat normal, if easier.
We are seeing a drastic increase in the number of music creators. The last stat I saw was that, in the past year, the number of music creators grew by 12%.
Michael: Wow.
Mike: The number of people making music is just amazing, and new technologies are certainly fueling that. Whether music is just part of your identity or you're thinking about it as a career, that's really exciting because it means music is more accessible to people. But from a job market standpoint—not going to sugarcoat it—if there are 12% more people available to do the job, it's harder to get that job.
To predict where the scarcity is going to be, there's a lot of talk right now about how much people should merge with AI. Even if you're writing an email, when is the moment where most people will only know how to write an email alongside an AI writing assistant? I mean, it's already happening, and we're nearing a tipping point.
So, the more that we merge with and rely on it—similar to the resurgence of analog film cameras and other somewhat nostalgic things—there might be a scarce market for purely human-generated content. Humans looking for other humans will become a very specific thing once it’s no longer the default that music is made by humans anymore.
That said, it sure sounds like a niche because, by definition, if the majority assume that music is made by humans plus AI, or maybe by AI alone, then human-made music will be a subset of that. But technically, it will always be a scarce resource.
I’d love to know what you think about this too because I’ve been thinking about this question a lot recently. It’s like the primary thing on my mind. Logically, I don’t think there are too many things about the process of making music—at least in a traditional sense—that AI won’t be able to do.
Totally different question if humans want it, but in terms of just raw instrumental capability, we’re not seeing those limitations.
Michael: It’s so much fun to talk about this.
Mike: It’s scary, though. I’m thinking about people listening, and both you and I have spent at least part of our careers making music and being inspired by it—whether actually making money from it or just the possibility of that. The idea that we’re not necessary anymore is really scary.
Michael: Yeah, it’s super interesting. For me, what comes to mind is it kind of comes back to that point you brought up earlier around self-expression. There are things that can’t be lost.
It’d be silly to predict anything with 100% accuracy right now because it’s evolving so quickly. But if I had to put my line in the sand and predict where things are going, I would say that, with the replacement of the necessity of certain labor—basically, the jobs we don’t want to do, the ones we don’t enjoy—I think those will obviously go away when they’re no longer required.
If we can have something like a Tesla Optimus or some other sort of robot replacing those, then, obviously, we’re not going to do those jobs anymore.
I think it’s going to create a vacuum or space where there’s a lot of soul-searching and a loss of identity or meaning. That might lead to a crisis where we, as humans, are kind of like, “Who are we? What are we doing here?”
I think a lot of people will turn to art and music, becoming creators and using these tools to tell stories and share their identities.
Yeah, it’s just hard to fathom right now what we’ll be able to create. I also think there’ll be digital environments and worlds that we create. We might even have something like Neuralink installed, where we can join each other’s worlds and experience these different environments.
Mike: I'm totally with you. Yeah, we’re hitting the point of the conversation where, I don't know, I'm thinking of Independence Day or classic movies where it's like the disaster just happened—what are we going to do about it? And so, if we accept that AI is capable—and technology as a whole is capable—of replacing most things that we could do... I think it was this morning that I saw there’s a new study showing that people prefer AI-generated poetry.
Did you see this? It's hilarious. They're doing these blind tests, and people like the AI poetry more. The funniest part is that readers took the obtuse language of the traditional poets for flaws—they assumed those poets were the AI, and that the writing was hard to understand because AI isn't really good at writing. So, it's going to change. The disaster's happening, but what are we going to do about it? And how are we going to step up?
I think you nailed it. We've been in a consumer culture for a century—or more. The economy has been driven by consumer culture for a while now. And I think we could enter into a much more interactive, creative, collaborative culture where... Let’s say right now, you could go to Netflix and type in a prompt to make a movie. What do you want to watch? Maybe that'll get really boring—to have that much freedom to consume. Maybe it'll be much more interesting in that world to create.
Michael: Oh yeah.
Mike: Than just, like, feed, you know?
Michael: That’s super interesting. I mean, some things that I’ve seen coming out around video games and AI, right? I used to love playing RPGs and video games where you're telling stories. You can make choices, and there's branching logic. Basically, these games were complex story arcs where you get to choose different decisions, and it changes the outcome of the game.
And yeah, just imagining a world where we could have a video game like that, where the people aren't prescripted in terms of dialogue, but their stories... You have more ability to sort of live in this world. And this brings me to this question of simulation theory and the idea that the most likely scenario is that we're not in base reality.
So, I’m curious to hear your thoughts on that as well. Do you think there's a chance that what we're in right now... When we think of simulations, we tend to think of things that are kind of dumb: fake, artificial, not real. Usually we picture something like Microsoft Sam saying, "I am a human." But I'm curious about your thoughts on this concept of simulation theory. Is it possible that we're living in some sort of fractal simulation within another simulation?
Mike: Well, philosophers have a hard time defining consciousness itself. We've been talking about it for many centuries, explicitly. And every time someone proposes something, it just gets shot down—all the time. And yet, we do it every day. We have it. We know how it feels. We know the phenomenology of being conscious.
I see simulation theory as an extension of that kind of challenge. And I don't know that we're going to solve it, but as a philosopher, I love that game—of playing it out, thinking through various theories, or testing them to see if there’s a way through. And it feels like a game. It's like running around Zelda, testing the walls to see where the secret closet is, you know?
Michael: It certainly seems like it may be unprovable whether we're in base reality or whether there's a higher, nested layer above us. But it almost seems guaranteed, if there's not some sort of extinction-level event, that we're approaching a point where we can create a nested version of reality below us.
Mike: But I think the reason I bring up consciousness is I feel like we’re doing that. We’re experiencing that every night when we dream. And, getting back to music—the space of music creation, whether solo or with a group—when you are playing... And this ties into that collaborative theme we were just talking about. When you're in that zone, that is the world. I mean, when you look at brain scans of people playing music, it’s taken over the whole deal. Especially if you’re getting some sensory motor stuff involved, it’s amazing how much of the brain that fills up.
So, music is a sort of dream state in a way. That is a subset of all the other things you could be experiencing in the world. But when you create that, to me, it has a lot of the trademarks of its own reality. And so, making that more immersive, making that more interactive, social, collaborative... That’s been my goal in my career as a designer—figuring out what user interfaces match with people's desires and intentions.
Traditionally, the limit was physics: you needed a certain instrument to resonate in a certain way to make sound. We don't have that limitation anymore, so now it gets to, well, what do humans actually want to do? And what user interfaces serve that? The ChatGPT prompt box is a user interface, and it changed the world two years ago.
VR? Or Neuralink and other brain implants that could actually let us fully live within our own constructed simulations and be fully immersed. Plenty of movies have shown us how this is going to feel. So rather than asking, “Are we the simulation? Are we the result of someone else’s simulation?”—to me, it’s more interesting, because I can’t answer that, to think about how we can create simulations or new realities and live within them.
And that’s when we get to free ourselves from... Back to scarcity—from a lot of the limitations that the natural world has always had. I do love the natural world, though.
Michael: Yeah, I love what you're saying. And I feel like it kind of gets at the root of some of the fear or concern around artificial intelligence or technology in general. It's this tension between what we view as the real world or organic materials versus something fake or artificial, or a simulation that's not real. I like what you're talking about in terms of dreams and how we're dreaming reality. It seems like there are some really interesting connections between dreams themselves, how they work, how our minds work, and how we create things in general. Maybe that's connected to what we're talking about in terms of how we express ourselves or our identity—like you're dreaming things into existence.
Mike: Well, and think about the musicians you love to listen to or watch—not just as artists but as people, as humans. A theme that comes to mind—there are some exceptions, like people who wear motorcycle helmets and things like that—but most of the time, you're able to observe musicians in the flow state, in the zone. Songwriters especially—you can tie songs to specific moments in the artist's life. There's maybe an autobiographical element there, and you get the story of a real person. That's inspiring.
As a musician, I try to think about whether I'm enjoying it, whether I'm in my own flow state, and whether I'm making music I actually like. I've made a lot of music in my life that wasn't for me—it was for other people. And that's great. But for my own music, if I'm making things that I genuinely like, there's a much better chance that other people will be inspired by that.
We've been in this commodified, consumer society for a while, and that got us here. But I hope the future is filled with more people finding their own way to be expressive. That's attractive, that's magnetic—hopefully more so than prepackaged pop music that nobody necessarily even enjoys making but still sells. The market will probably stay fragmented, maybe even extremely fragmented, compared to having just a few superstars at the top. But I’d argue that's going to be a really good thing for musicians and for most people.
Michael: Cool. So, it sounds like what you're saying is that one of the essential things you think will happen in the future is that things will get more interactive in terms of media. Rather than us watching a movie that's been pre-created for us, we'll have more expression and interaction—our own stories molded into these experiences.
That will probably decentralize things, but we'll have a lot more authentic stories being shared, closer to the root of who we are. There's also a question around scarcity. In this post-AI world, where are the things that remain scarce or become more scarce because of this technology?
Right now, scarcity drives human evolution, at least in terms of capitalism and the market. It's about where the need is and where the supply doesn't meet the demand. We've created a system to monetize and address those needs until they're no longer a challenge, and then new needs arise. It's like we've gamified the human experience to solve problems.
Mike: Problems that, in some ways, we just make up.
Michael: Yeah, that’s an interesting point. It kind of breaks the system if you’re literally creating problems just to solve them, which is true in some ways. But generally, in a free market devoid of corruption—which obviously isn’t always the case—the market follows genuine human needs. That drives the evolution of products.
I wonder, in a post-AI world where so many needs have been solved, what becomes scarce. Ray Kurzweil talks about this in The Singularity Is Near. He mentions how everything becomes an information technology. I've also heard people say that information itself could become a point of scarcity—like training data. Even if AI could create its own training data and we could learn from that, there would still be a bottleneck.
It feels like the limiting factor becomes how quickly we can generate new training data and feed it back in. Maybe it's also energy itself—how well we harness it. AI requires a lot of energy and electricity. We’re already building bigger power plants and figuring out how to process energy more efficiently, whether that's from the sun or nuclear fission.
I’m talking way above my pay grade here, but maybe that's where scarcity remains—aligning with resources like energy and compute power because those are still finite.
Mike: This is practical in a different way than what we were discussing earlier about scarcity. When it comes to music production—can I get a good sound, record it, and distribute it?—there's no scarcity there anymore. But now we’re talking about the scarcity of resources more generally, and maybe for the creative process itself.
We mentioned the diffusion or distribution of audiences. Things are going more niche. There’s also a major challenge—maybe it’s already here in the streaming world—of finding your audience. How do you find the people who care?
I’m optimistic about this in general. More people in the world are making music, and more people are enjoying music. But that’s not the traditional model for the music industry. Musicians entering this market with dreams of making it big or becoming a star will need to fundamentally adjust those expectations.
We’re also talking about the instrumental side—can you make music? And the market side—can you make money from music? But lately, I’ve been obsessed with the "why." Why do we do any of this?
I had ChatGPT open recently, and after I was done with a task, I didn’t have anything else to ask it. It felt great, like I didn’t need it. I’d rather take a walk or read a book.
If we reconnect with the reasons why music exists for us—if it’s just to fill space with sound, then sure, AI will excel at that. But music has always meant more than that. If we remind ourselves of all the things music means to us, it’ll open new opportunities for interactive media and for musicians to create music that isn’t the same every time it’s played.
There have been experiments with that over the years, but the technology and infrastructure are still catching up. Even classical composers from hundreds of years ago could have gotten into the idea of music evolving with each performance. Traditional classical compositions were played differently every time because of the performers and the audience.
Only in the last century or so has music become this static, frozen commodity. I think breaking out of that mindset will be really healthy, creating new opportunities for musicians to craft interactive experiences. But it won’t be about recording once and being set for life. Ideally, it’ll become a much more active, ongoing process.
Michael: That's really interesting. I hadn't really thought about it like that. It used to be the case that every single time the music was performed, it would evolve and change based on the room, based on the meeting of the audience, the energy of the room with the performer. It does seem like a lot of the best performers of all time—like, I mean, the Beatles—are famous for this. They performed their music a ton, and back in the day, before you recorded a song, you really worked it out by performing it over and over again, tweaking it, iterating on it through all of those shared experiences, and seeing which parts resonated with the community. So, it's interesting to think that we might take a step back towards that and maybe even make it more interactive from a community standpoint. So, it's not just a function of all the people who are there, part of the room. It's more about co-creation rather than siloed independent creators.
Mike: I mean, recorded music and the packaging of the song or album as an object—a commodity to buy—actually disincentivized or discouraged a lot of people from being musical in ways we used to be. It used to be more common for people to make music together, sing together, play together. Whether you were a musician or not wasn't this hard-defined thing. It was more casual, more social. Then we recorded music, and now we compare ourselves to these perfectly produced recordings. For a lot of people, even musicians, that's just too intimidating. They shut down. It's like, "You can't get up here, you might as well not even try." How many people do you meet who just say, "I don't have a musical bone in my body. I'm not good at this"? It's like, "No, you do. You're human."
So this might be a fortuitous moment in history to shake up the industry built around creating static objects and get back to something that's going to be more interesting for people: music that breathes with you, that responds to you, and instruments or user interfaces that let you encounter music in a new way, in an embodied way.
There are so many parallels. I'm thinking of Nike and running. Casual running through the neighborhood wasn't always a normal thing. It took not just Nike but a whole movement of casual exercise to normalize it in the mid-20th century. And now people exercise without saying, "I'm eventually going to be a pro athlete." That's not the goal. They just want to enjoy the feeling of it right now. I think being musical could have a lot of that same shift—from a pro-first industry back to one where people are more active and interactive. That sounds beautiful to me, and that sounds like a lot of fun.
Michael: Absolutely. Yeah, I couldn't agree more. It seems like the tools coming out right now make it so that anyone who has a heartbeat and a creative mind can create music. And we're still in the early stages of it too. Someone with zero experience but a great ear for music, someone who absolutely loves music, could easily create a song that's much better than my first song when I started recording on GarageBand. And honestly, we're approaching a point where, at least in terms of return on time invested, in like 10 seconds you can generate something that's really, really good. Maybe we can still beat it right now, but it just takes a lot longer. With humans using the tool, you kind of get the best of both worlds.
One thing I'm curious to hear your thoughts on, because you have an interesting perspective on these interfaces—the interfaces between humans and music. A lot of your background is in creating natural interfaces that people can express themselves through and turn into music. One of my favorite interviews we've done on the podcast was with Nolan Arbaugh, and we released it about a month ago, at the time of recording this. He's the world's first Neuralink patient.
Mike: That chat was fun.
Michael: It was fun. Yeah. I mean, he's awesome. And you know, he, quote-unquote, created a song telepathically using his brain interface. He's paralyzed from the neck down, but he was able to create, in my opinion, a song that's much, much better than my first one, just using his thoughts. So, I'm curious—do you think that type of technology, that type of interface, is where technology is headed? Most people I talk to about Neuralink are terrified by the idea. Like, "There's no way I would ever get one of these things installed." What are your thoughts? Do you think that eventually it'll be like a cell phone, where everyone has one? And in that world, how do you think it will affect our sense of identity, self, and co-creation?
Mike: I had an experience this past week that might be part of the way toward something that feels almost mind-controlled. There was an update to iPhones that lets you do eye tracking to move a cursor on the screen.
Michael: Wow.
Mike: This is tech that's been around for a while, you know.
Michael: Yeah. I've used it on the Vision Pro, and it's really cool, but I didn't know there was an iPhone version.
Mike: They took the Vision Pro eye tracking, put it in iOS, and now it's in the accessibility settings. It's not perfect, but think about it: our eyes just kind of move. We don't have the same sense of motor control with our eyes that we do with our hands, so there's something almost straight-into-the-brain about it.
Michael: Eyes are the window into the soul.
Mike: Sure. So are they the window into your musical soul as well? Maybe. And, you know, the ideal, I think, with an instrument or an interface is that it disappears. It becomes part of your body, a prosthetic in a way. And so, looking at augmented reality in addition to physical interfaces, or these different ways of detecting intention and responding to that with these feedback loops, I don't know that we need electrodes implanted to get there.
I think that's exciting and inevitable, part of our future. But I think a lot of people could experience that right now with smarter interfaces, more adaptive approaches to making music and thinking about the world.
You've seen these—you can connect wires to plants and bananas and make music with them. There are so many things out there that are just fun, musical Legos, all that kind of stuff. I think embracing all of that is really inspiring, even for trained musicians, to get out of the idea that all you do is play the guitar you learned when you were 10. That goes straight to your point.
Once that technology gets more mature—brain- or intention-controlled music rather than just motor-controlled—what we were saying a few minutes ago about interactive versus static media will be that much more obvious: it might feel boring to just passively listen to something once you get used to interacting with it. Once we go there, I don't know that we're going to want to go back. I'm thinking of A Clockwork Orange, where you're stuck in a world with no control over what's happening. I think we're going to expect to have some control, even over our future idea of what music is.
What is a song? Is it something that is the same three minutes every time, or is it a space that you enter into and you're part of? And is it responding to you? I want to live in that world. And I think that will provide new opportunities for musicians to design those interactive spaces. That isn't just a one-and-done deal.
An artist could release a song that evolves over time and continually has new elements to explore. And people could explore it, whether they're literally walking around the room or just thinking, exploring the song by way of neurons. The book Infinite Jest, a huge, crazy novel from the '90s, imagines a video so engrossing that people will not stop watching it. It's this crazy story around all that. But the idea that you could be infinitely entertained by media, that we might be developing media exciting and engrossing enough, and that adjusts to you, is a bit darker than what I'm describing.
I'm a bit more optimistic to say no, it's going to be human. It's going to be something that we actually co-create. That sounds great to me. I'm embracing all that, at the same time as I'm not getting rid of any of my traditional instruments. I love them. But I might be building up more of a museum here than the future, with guitars and violins and synths and things.
Michael: That's fun. Yeah. I mean, it's super interesting. It sounds like what you're saying is that intuition and intention are always the real drivers of interfaces—the point of an interface is to more directly connect your intention with the end result. And right now, our interface with technology is basically two thumbs, which is sort of like having dial-up internet versus extremely high-speed internet. If we had a better interface, like our eyes, that's a big step in the right direction.
Because it's more intuitive. It's more of a direct interface. But if we had the ability to literally just think our intentions, it would really speed up the flow of communication, both with each other and with technology. Yeah.
Mike: That's actually what inspired Orba, one of Artiphon's instruments. The inspiration was the texting modality, the fact that we're really good with our thumbs, and the idea that you could design an interface that celebrates that. We have so much thumb dexterity now that we didn't have a couple of generations ago, just as earlier generations were much better at handwriting than most people are today. But looking at how people actually behave and designing instruments or interfaces for everyday life, I love that stuff, rather than forcing people into traditional modes just because that's the tradition.
Michael: Yeah.
Mike: I think there's a huge opportunity, like you brought up with the Vision Pro, in giving our tech more spatial awareness and body awareness. It doesn't always have to be a thing you hold; it can just respond to your own body. What Meta is doing with the Orion armband, picking up little twitches and nerve signals and interpreting them, gets pretty close to the electrodes we're talking about: internal body processes becoming augmented creativity externally.
So, yeah, I'm only seeing good things, but like we said at the beginning of the conversation, it's fundamental change. This is not going to be familiar to the world we grew up in or to the industry as it stands. Some of the record labels and streaming services are catching up with this, but it remains to be seen if they can move fast enough to change how people want to experience music itself. I hope they lead it, because we need people to lead this, but right now I think it's probably up to musicians to figure out what that feels like.
Michael: Absolutely. Well, man, this has been a really fun conversation. It's very rare that I get to geek out this much and go down a rabbit hole like this. So, Mike, thank you for being on the podcast today. Certainly, with everything we're talking about, it's not hard to imagine many of the risks and challenges, the fears and concerns, which are extremely valid. Especially when it comes to Neuralink: what could go wrong? Having someone control your thoughts and your brain, there are obviously a lot of real risks and concerns with these technologies. But I feel like the optimistic side doesn't get its fair share of expression, because pretty much all forms of mass media that have talked about this technology frame it as evil, something that's going to destroy us and take over, where we lose our humanity and go extinct. So I think it's refreshing to have a conversation like this and explore where things are going with a more optimistic tone, while also acknowledging we have to be careful about doing this in the right way.
Mike: And adapt to whatever that new economy is. And I think musicians are some of the most adaptive people in the world. We're used to real-time pivoting, playing jazz, playing in time. So, I think we'll be fine. I think we're good for this.
Michael: Yeah, humanity has proven to be pretty resilient in terms of adapting to sweeping changes. Like the fact that something like 85 percent of us used to spend our whole time farming, and now it's less than 10 percent. But Mike, it's been great having you on the podcast. I feel like we could continue this conversation for several hours, so maybe at some point we'll have a chance to do another one. Maybe if you're down here in Orlando, we can have you out at the in-person studio for a longer conversation. Thank you again for being on the podcast today. And for anyone listening who's interested in connecting with you and diving deeper into what you're doing right now, could you share the best way to connect?
Mike: Yeah, just email me. It's just my name, mikebutera@gmail.com.
Michael: Awesome. Alright, Mike. Well, maybe the next time we're talking, we'll both have Neuralinks and we can just, you know, beam our thoughts into each other's heads.
Mike: And we'll have an AI-dictated, externally generated set of notes. Or maybe we'll just make music together.
Michael: Down for that. That sounds more fun.
Mike: Yeah, cool. That was a great time. Thank you.