Incongruent

AI And The Future Of Music Education: John von Seggern

The Incongruables Season 5 Episode 14


A world-touring jazz bassist turned educator and AI builder joins us to explore what happens when smart tools meet hard-won craft. We dig into how Futureproof Music School blends a curriculum-aware chatbot with real mentors so producers learn faster, pay less, and still develop their own voice rather than a template sound. From Wembley Arena stories to DAW specifics, John breaks down what large models already understand, where proprietary production knowledge still wins, and why structure matters more than infinite answers.

We take you inside Kadence, a memory-based AI co-pilot that analyses mixes, compares references, and serves targeted, actionable feedback instead of overwhelming students with lists. Think fewer rabbit holes, more progress: clear mix notes, arrangement guidance, and strategic nudges that build week over week. We also get honest about what students actually want from AI right now—help with marketing, release planning, and social consistency—so the music doesn’t drown under admin. The throughline is creativity as curation: your taste decides what ships, even if AI offers a hundred options.

We tackle the big questions too. Can detectors keep up as artefacts vanish? What counts as responsible use when training data is opaque? John argues for consent and compensation, drawing a sharp line between experimentation and high-stakes commercial work while legal frameworks mature. Looking forward, we preview screen-aware and voice-first coaching that can see your DAW and fix problems in context, compressing learning curves without flattening style. If you care about music production, AI ethics, and building a career that sounds like you, this conversation brings clarity, nuance, and practical steps.

Enjoyed the episode? Follow and share with a friend, and leave a quick review to help more producers find the show.


Stephen King:

Hello everyone! We are having a really, really, really exciting time. We're speaking to so many amazing people. And today, returning is Imnah.

Imnah:

Hi everybody. Clearly we've been talking to so many incredible people that everything is all just one big AI blur at this point.

Stephen King:

It is. And this is a musical blur. So who did we speak to today?

Imnah:

We spoke with John von Seggern, whose name is pronounced exactly the way it looks. He is the CEO and founder of Futureproof Music School, and we got to have a really interesting talk with him about how AI is enhancing different aspects of his music school. And also about the things that AI can't do when it comes to music, like take over for musicians, because there's a fundamental uniqueness and creativity in work done in any form of the creative arts that just cannot be replaced.

Stephen King:

And we go through a whole lot of things related to education as well. So if you are all ready, we are ready to go. Here we go.

Imnah:

Everyone say hello to John von Seggern. He is the CEO and founder of Futureproof Music School. And yeah, we're really excited to have you on here, John.

John von Seggern:

Yeah, thanks. I'm really glad to be here and talk about AI.

Imnah:

Amazing. So first off, I think let's tell people a little bit about who you are, your background. And so as we understand, you've got a background in music, in education, and also now in AI. How does that happen to one person?

John von Seggern:

Well, if you're a musician or an artist and you have a long career, I think you end up going through a lot of iterations and incarnations of what you first started to do. I am originally an acoustic jazz bass player. That's what I trained to do in school. I went to jazz school in New York, then moved to Japan to pursue my early jazz career for several years. I was playing a lot in Japan, but I realized at some point that jazz musicians don't get paid very much, so I became open to other kinds of musical employment. And as fate would have it, I was hired by one of the biggest stars in Asia, from Hong Kong. His name is Jacky Cheung, Cheung Hok-yau in Cantonese. I signed on with him for a year and toured all over the world. We played about a hundred concerts, and these were giant concerts. I've played at Wembley Arena and Madison Square Garden and other giant venues with different Chinese stars. After playing with him for a year, I stayed in Hong Kong for another four or five years. I'll try to make this short, because my career's pretty all over the place, but I ended up doing a graduate degree in Hong Kong on the effect of the internet on music, specifically on musical styles. Then I moved back to the US, to LA, and ended up working in music technology. I've worked for a couple of software companies, including the German company Native Instruments, which is very well known. Then about 15 years ago I got into education, and I have run online programs in electronic music production ever since. I've been at a few different schools, and last year I left my last institution and started my own school with a partner.

Imnah:

Wow, that's fantastic.

John von Seggern:

Just to say, throughout my career in education my greatest interest has been adopting new technologies to help people learn better and faster and easier. And so my interest in AI was just an extension of that. It was just like, oh, this is the next thing we can use to help people learn better. So that's how I got into that.

Imnah:

Perfect. So it actually seems like you've lived a very full life so far. So you've done so many different things. How did Future Proof come about? What was kind of the seed of thought that was in your head at the time?

John von Seggern:

Sure. Well, I'm always trying to think about how to do things better and solve problems that come up. And so Futureproof was really the outcome of thinking about my prior program and how it could be better and reach more people. I was working at a school here in LA called Icon Collective, and my online program was very highly rated and successful with the students, but it was quite expensive. And the biggest problems I saw were, well, for one thing, having it be so expensive meant that a lot of people couldn't come, and often the best artists and musicians don't have a lot of money, so I felt like some of the best people couldn't come. And then also, like most schools, we were offering a year-and-a-half set program that everyone would do. So I became very conscious that if you're somebody who already knows quite a bit and is maybe trying to make your career, you're never gonna think about going to a school like that. That's for people who are starting from the beginning, or close to the beginning. And that was a bummer to me. We weren't attracting the best students, the ones that we really wanted to work with. And so when AI became available to us to use, in the wake of ChatGPT, as I got into it further and learned more about it, I realized we could be using it for part of the educational process, and by doing so we could potentially make it a lot cheaper and bring high-quality education to a lot more people. And so our basic concept is: we want to meet people where they are and take them where they want to go. And we're using AI to facilitate that. AI is only part of the picture, though. I believe very strongly that, especially in the arts, you need to learn from a master artist at some point. Once you've got the basic skills under your belt, you need to learn from somebody who's done it and has had to make those kinds of decisions about their career and their life and their artistry. So we're trying to use AI more at the lower levels to help people, and then we're also matching them with mentors to guide them. It's kind of an AI-human hybrid model, right?

Imnah:

So do they still... oh, go ahead, Steve.

Stephen King:

Oh, sorry, I didn't mean to talk over you. We tend to do that, don't we? So I wanted to ask about the technology that you're using, because I'm aware of voice recognition and voice synthesis in terms of AI. What services are you relying on for your music teaching, your music education?

John von Seggern:

Well, I'm using the same big foundation models that everybody else is using, mostly GPT-5 and Gemini. We use Gemini specifically because it's the best for musical analysis. And I've built a chatbot that is integrated with our curriculum. My original model was the Khan Academy AI co-pilot they call Khanmigo. You're studying something and it pops up and helps you understand it in a different way, or you can ask more questions, or you can go deeper into it if you want to. So that's what I've built. It's also got a very deep memory about each person; it keeps track of its conversations with you and the music you've submitted and what you need to work on, and it resurfaces those things as you go along.

Stephen King:

So what does it look like? For example, say I am composing, or I am learning to play something on my saxophone, and I record it. Do I upload it to the AI, and then the AI tells me the tonality or gives me some advice on it? How does it work?

John von Seggern:

It can, yeah, it can do exactly that. But that's only one of the functions. A lot of the time it might just be understanding some concept better. I've programmed it so that if somebody has a question about something in the curriculum, it has a few different educational strategies it'll try. One could be Socratic questioning, where it just keeps asking you questions. I don't think a bot that you can simply ask questions to and that tells you answers is that useful; that's only of limited use in education. Your responsibility as an educator is to help guide people to learning and knowledge, and you need to structure things so that the chatbot is part of the overall experience, not just an extra thing hanging out that you can ask questions to. It's easier to say that than to do it; it's a long process. The most important thing to think about is that we all have these giant AI models available, but what comes out is totally dependent on what goes into it. So it's a combination of the instructions that we're giving it, the knowledge that it has about the student, and the knowledge it's drawing from the curriculum that we designed.
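To make that combination concrete, here is a minimal sketch of how those three ingredients (the system instructions, the stored memory about the student, and the relevant curriculum excerpt) might be assembled into a single request. It is purely illustrative and not Futureproof's actual code: the `call_model` helper and the `StudentMemory` fields are hypothetical stand-ins for whichever chat API and data model a school actually uses.

```python
# Hypothetical sketch: assembling a curriculum-aware tutoring prompt.
# call_model() is a placeholder for whichever chat-completion API is used.

from dataclasses import dataclass, field


@dataclass
class StudentMemory:
    name: str
    goals: str
    weaknesses: list[str] = field(default_factory=list)
    recent_submissions: list[str] = field(default_factory=list)


SYSTEM_INSTRUCTIONS = (
    "You are a music-production tutor embedded in a structured curriculum. "
    "Prefer guiding questions (Socratic style) over direct answers, keep "
    "responses short, and connect advice to the lesson the student is on."
)


def build_prompt(memory: StudentMemory, lesson_excerpt: str, question: str) -> list[dict]:
    """Combine instructions, student memory, and curriculum context into one request."""
    context = (
        f"Student: {memory.name}. Goals: {memory.goals}. "
        f"Known weaknesses: {', '.join(memory.weaknesses) or 'none recorded'}. "
        f"Recent submissions: {'; '.join(memory.recent_submissions) or 'none yet'}.\n"
        f"Current lesson excerpt:\n{lesson_excerpt}"
    )
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "system", "content": context},
        {"role": "user", "content": question},
    ]


def call_model(messages: list[dict]) -> str:
    # Placeholder: in a real build this would call GPT-5, Gemini, etc.
    return "(model response)"


if __name__ == "__main__":
    mem = StudentMemory(
        name="Alex",
        goals="release a melodic dubstep EP",
        weaknesses=["muddy low end", "static arrangements"],
        recent_submissions=["demo_v3.wav"],
    )
    msgs = build_prompt(
        mem,
        "Lesson 4: sidechain compression basics.",
        "Why does my kick disappear under the bass?",
    )
    print(call_model(msgs))
```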

Stephen King:

So, in terms of the knowledge that you've put into this, have you created a music database? Is there a model into which you've input sheet music, or even just the way that things should be played?

John von Seggern:

Well, first of all, you have to understand we're not teaching people how to play the saxophone; it's mostly about electronic music production. So a lot of what we're teaching is software techniques. If we were trying to teach people to play instruments, that would be a whole different challenge, and we're not equipped to do that. But as far as the knowledge we've put into it, you have to realize that there are two sides to that. For one thing, these giant foundation models like GPT-5 are trained on everything already. So they already know all about music theory and composition and so on; we don't have to put that in. What we have found, though, is that the concepts and knowledge our students really want are production secrets. My partner is a dubstep DJ and producer, a pretty famous guy; Protohype is his artist name. He knows techniques that the AIs don't know, actually, because there was no place for them to get them from. Those things are in our curriculum, and whatever is in our curriculum, our AI draws from. We haven't really given it a lot of specialized knowledge about music; that's not really necessary, honestly. I do give it some manuals for the software that our students most commonly use, so in some cases it's better that it has a single source of truth rather than just looking things up on the internet. But for many areas the models already know everything. So it's more a question of surfacing the right information at the right time.
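As a toy illustration of the single-source-of-truth idea, the sketch below answers software questions only from supplied manual excerpts rather than from the model's general knowledge. The manual snippets, the keyword scoring, and the function names are all invented for this example; a production build would more likely use embeddings and a vector store over the full manuals.

```python
# Hypothetical sketch: ground software questions in supplied manual excerpts
# instead of the model's general web knowledge. All content here is invented.

MANUAL_CHUNKS = {
    "ableton_warp": "Warp modes include Beats, Tones, Texture, Re-Pitch, Complex, and Complex Pro.",
    "ableton_sidechain": "To sidechain a compressor, enable the Sidechain section and choose an audio source.",
    "serum_lfo": "Drag an LFO onto a knob to assign modulation; adjust the depth from the matrix.",
}


def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a real build would use embeddings."""
    q_words = set(query.lower().split())
    scored = sorted(
        MANUAL_CHUNKS.values(),
        key=lambda chunk: len(q_words & set(chunk.lower().split())),
        reverse=True,
    )
    return scored[:k]


def grounded_prompt(query: str) -> str:
    """Constrain the model to answer from the retrieved manual text only."""
    sources = "\n---\n".join(retrieve(query))
    return (
        "Answer using ONLY the manual excerpts below; say so if they don't cover it.\n"
        f"{sources}\n\nQuestion: {query}"
    )


if __name__ == "__main__":
    print(grounded_prompt("How do I set up sidechain compression in Ableton?"))
```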

Imnah:

Yeah, it's interesting that you put it like that, especially if we go back to how you described your approach as a human-AI hybrid model. So is there any aspect of the creation and the production that AI is involved in, perhaps from the students' side of things, or maybe even from the faculty side of things?

John von Seggern:

We're developing some curriculum around generative AI music creation, but in general we're more interested in using AI to support the students in other ways. I think what our students are most interested in right now is how they can use AI to help them with marketing and business and social media posting and that kind of thing. For musical creators today, the biggest frustration is that there are too many things to do. Writing the music is just part of it. You have to create your public persona and post online all the time, and there's just so much to handle. So we're trying to teach the students how they can use AI to support themselves in a variety of ways. Most of our students aren't that interested in generative AI music creation at this point, actually.

Imnah:

That's interesting. And do you see any scope for new innovations changing what that looks like right now?

John von Seggern:

You mean on the creation side?

Imnah:

Yeah.

John von Seggern:

Yeah, I do. I think it will become one of the tools that people have to make music. At this point, AI music creation is widely hated and opposed by a lot of people, but I do expect that to change. I think it'll be normalized and everyone will get used to it before long. I haven't spent that much time... go ahead.

Imnah:

I was gonna ask if you think there's a potential that AI would just replace musicians altogether. Like one day we wouldn't have Steve sitting on the other side of a call playing the saxophone. We would just have AI Steve.

John von Seggern:

I don't think that will happen, personally, and there's a couple of reasons. One reason is that the results you get in art are in large part because of the process that you went through. Leaving AI out of it, you could always just get samples of saxophone solos for your music now. But if you had Steve record something, it would be something special and sound like something that nobody else has. I always encourage our students, if they can sing or play any instrument or do anything, to include that in their music, because it gives them a flavor that other people don't have. Also, and we're getting there very quickly, let's say we're in a world where you can generate a pretty good song by hitting a button. Okay. But that raises the question: if I can make one song in a minute, I could make 10 songs in 10 minutes, or 100 songs in 100 minutes, and then which songs am I gonna release, and which ones are better than others? Somebody still has to decide that. So even if we are in a world where everyone is generating everything with AI, there is still the human creative vision you use to decide which thing you want to put out in the world and put your name on. You know what I mean? Even if you didn't do anything else with it, I think that's really, really important. And I think electronic musicians especially have kind of been working like that all along, because the approach for many artists has never been so much that you hear something in your head and try to realize it, although sometimes that is the case, but much more often that you're just experimenting with things in the studio to see what happens. You know, what if I plug this synthesizer backwards into this thing? Wow, that's really cool. Then you make a bunch of sounds, and you come back and realize, oh, that little bit would be great to build a track around. So what we believe at Futureproof is that ultimately creative vision and taste will become much more important. And whether people are generating all the music or not, they're gonna be using tools to help them with the things that are more on the grunt-work side. Like, and there are things like this already, if I could have software that helps me mix my track better, I'd like to use that. I might not agree with everything, I might still work on it on my own, but if I could get halfway there with AI, I'd love that. And then I could spend more time making the actual music, you know.

Stephen King:

If someone is using AI to create music... we see integrity issues with this, whether it's with advertising or with film or with text. Now, is there any way you know of to check whether someone is using AI-generated music? Because in the electronic music scene everything's electronic anyway, right? So is there a fingerprint that you can detect, or is that something that's just not needed right now?

John von Seggern:

Well, that's an interesting question. And yes, there are detectors, just like there are AI text detectors. I think the music detectors probably work a little better, actually, because the generators leave detectable artifacts in the music that a computer can pick up. And there are a couple of platforms. Deezer is a competitor of Spotify; I don't use Deezer, but they are able to detect AI music and show you whether a track was made by AI or not. I think the tricky bit is that I expect more people in the future are gonna be using AI to do part of the music. Maybe they just make the drums and then do everything else themselves. And I think it's gonna get more and more tricky to distinguish what was made by AI and to what extent. I also think the generators will improve; eventually I don't think the detectors will work. I already think that with text. I'm always reading in education about students who say, I wrote this essay and handed it in, and now my professor says I wrote it with AI and I failed. With text, sometimes you can tell, and sometimes it's obvious, but a lot of the time it isn't. I know how to use AI myself to make text that doesn't sound like ChatGPT.

Stephen King:

So there's always the case of the false positive, which is going to dramatically affect a student's life.

John von Seggern:

I do think, sorry, but in music there is obviously the question of copyright and intellectual property. So right now, if you were working on a big Hollywood production, or really pretty much any commercial kind of job, you probably wouldn't want to use any of these AI generators, because the copyright situation is unresolved right now. I do expect that it will be resolved. The biggest question is whether it would be technically possible for AI generators to determine what influences were drawn on for a particular piece and then pay out under some kind of royalty model. I highly suspect that's what the major labels and the AI music companies are negotiating right now. We'll see how that works out, but I expect they'll make some kind of deal and money will flow.

Stephen King:

Money will always unlock the doors, right? You have this fantastic chatbot, Kadence, and again, the literature says these chatbots are really beneficial to students. How long have you been offering Kadence, and have you got feedback? Has there been any evaluation of its effectiveness as a teaching buddy?

John von Seggern:

I've been working on it for about a year. We are a relatively new school; we only officially launched in May, so I can't say that I have a lot of data about how much Kadence is helping. I have a lot of data about how much chatbots have helped other educational institutions. But part of the question is that we're on this technological curve of advancement with AI, and so as an AI builder you have to think: what can I build today, and in the next six months, and in the coming few years? Those will often have different answers. Like the voice bot you mentioned before: I have built a voice version of Kadence that's on our website, actually, and it can see your screen and discuss things with you. The voice technology works great, but the screen-sharing technology, which is something made by Google, doesn't quite work well enough for the applications I want. It's really close; it can't quite read the fine print on the screen, and that's the main problem. But any day now Google will release an update where, okay, now it works better. And then that technology will leapfrog a lot of the things I'm doing now, because once the AI can see what you're doing on the screen, you don't have to explain it anymore, and it can be a lot more helpful, and you don't have to feed data in on the back end so much, because it can just see you and help you.

Stephen King:

Excuse me, because I'm not that familiar with electronic music production; I only do sheet music and classical notation. So correct me if this question is wrong, and hopefully you can translate it into your own terms. But I'm assuming that if the AI browser they've now created can see what you're writing or composing, it will then be able to suggest better chords, better ways of writing a particular piece of music. So could there potentially be co-authored music?

Imnah:

Kind of like Grammarly, but for music.

Stephen King:

That's a very good way to explain it.

John von Seggern:

Yeah, music is different in a few ways. It's really hard at this point to process music in real time. You kind of have to make a mix and then give it to the model. And to do a good job of analyzing the music, it needs to have the whole thing; it needs the context of what's happening to figure out what kind of music you're trying to make, what happens in the arrangement, those kinds of things. But I also wonder: is it really right to say better? One of the ways I use AI in non-musical things is for brainstorming and coming up with new ideas, because I can ask it, you know, come up with a hundred new ideas for this, and then I'll look through them all, and maybe most of them are bad. But number 93, oh, that was great, I never would have thought of that. I'm still the one picking what to do. It's just offering me possibilities.

Imnah:

So do you think that the inclusion of AI makes this process somehow faster? Streamlines it, maybe, for your students? Maybe think about it in terms of being different from your contemporaries. Does AI actually give you and your students a boost in that regard?

John von Seggern:

Well, on the education side it definitely does. As far as creating the music, I think potentially it will. But I think what producers would like to have is tools that help them with the parts of the process that are grunt work or drudgery now. For example, we've all been using samples for a lot of things for years. Until a few years ago, you would have just been looking through samples: no, not this one, not this one, not this one. Now better tools have been developed that help you narrow the field. Or, for example, you can find drum and percussion loops where the timing already lines up. They might be all different styles, but the tool can see that the timing works, so any of these would work, and then at least you're in the ballpark. So I think producers would like things that help them. I think producers and artists are less interested in something that does the whole work for them, because then they have less investment in it and it's less personal, among other reasons.

Imnah:

Yeah, I see it similarly, as a parallel to what we talked about with content, right? It's completely different when there's a person sitting behind a computer putting their thoughts in, which turns into an entirely different creative output than what you would get from, say, ChatGPT if you had asked it to do the same thing. It's something about our lived experiences and our biases and even our imperfections that makes those kinds of things unique. And I feel like that would be quite similar for music. Would you agree?

John von Seggern:

Yeah, for sure. And I should say, too, part of my experience running these programs for years is that if your courses and your curriculum are well designed, you can see the students learn and improve. That was my experience in my last program. I'd see people come in who didn't really know what they were doing. Gradually, as I heard their music, it would get better: it would sound cleaner, better organized, more like their references or their idols or whatever. But I became aware that the biggest problem was that, as an artist, it's not that useful to just become the 500th person to sound like your hero. Nobody really cares about that. If you want to make an impact as an artist, you've got to be doing something that other people aren't doing. And AI could help or hinder that, depending on how you're using it. You know, if I just go to AI and say, make a techno dance track, it'll just make some generic thing that doesn't sound special at all.

Stephen King:

Right?

John von Seggern:

That's the fear. But if I spend all day generating things and taking them apart and trying to find something... I have generated quite a bit of AI music that is quite unique and different from anything I would have been able to make any other way. And that interests me a lot. It's still not quite at the level where it sounds good enough for me; that's kind of what's holding me back. But I definitely think it could lead to a lot of new creative directions that we haven't tried before.

Stephen King:

I'm just starting to think in my head, and this is a crazy idea, but we have the large language models; surely there could be a large music model, something specialized purely for music creators, because music is a different language in its own right. It's got its own phonemes, or whatever the equivalent is. There should be a way of building a transformer, and I'm just going through some of this technology, that could understand the context of the music you're trying to play, if it doesn't already exist, and then you'd be able to build different services. You mentioned drums, so you might want a whole series of drums you could apply. There are different tempos you have to keep with the chords, with the guitar, so there are different things you could do with that. I'm just wondering, is that something you've even thought about, or is that something I'm just fantasizing about and going crazy? You know, having a Claude, but a Claude for music and only for musicians. Does that sound like something?

John von Seggern:

I mean, yeah, there are things like that. There's a lot of work in that area. The biggest company is Suno AI. I don't know if you've heard of them, but yeah, they have a large music model; that's what it is. It's a music generation model that's been trained on a lot of music that they're not willing to identify, and they're currently negotiating with the major labels to gain the right to use it. But also, the big foundation models like ChatGPT and Gemini are trained on music. In fact, GPT-5 is not, but GPT-4 was, and Gemini Pro is. The way Kadence works is we take a student's mix and send it to Gemini, and it produces an incredibly detailed, long report about the music, including transcribing the lyrics, which is kind of fun. But then it doesn't just shoot that back at the student, because as an educator I know that the way to help people learn is not to drop huge amounts of information on them. You have to have a structure where you're feeding the right thing to the person at the right time so that they can learn. I've always told my teachers: when you're giving students feedback on their music, don't tell them ten things. Try to tell them a couple of things. If you can tell them one thing that they actually remember and then do, that's huge. If you tell them ten things, they're likely to forget all of them. So Kadence takes this report on the music, then it looks at what this person is trying to do, what their music has sounded like in the past, what their strengths and weaknesses are, and then it tries to tell you something useful and actionable that you could do.
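The analyze-then-distill pattern John describes can be outlined as a rough two-stage pipeline: one pass that produces the exhaustive report, and a second pass that reduces it to a couple of actionable notes tailored to the student's profile. This is a hypothetical sketch rather than Kadence's real code; `call_model` stands in for the Gemini or GPT calls, and the prompts and example inputs are invented.

```python
# Hypothetical two-stage sketch: exhaustive analysis first, then a distilled,
# student-specific pair of suggestions. call_model() is a stand-in for real
# LLM API calls; none of this is Kadence's actual implementation.


def call_model(prompt: str) -> str:
    # Placeholder for an actual LLM API call.
    return "(model output)"


def analyze_mix(audio_description: str) -> str:
    """Stage 1: produce a detailed technical report on the submitted mix."""
    return call_model(
        "Produce a detailed report on this mix: tonal balance, dynamics, "
        "stereo image, arrangement, and lyrics if present.\n" + audio_description
    )


def distill_feedback(report: str, student_profile: str, max_points: int = 2) -> str:
    """Stage 2: reduce the full report to a few actionable notes for this student."""
    return call_model(
        f"Here is a full analysis of a student's mix:\n{report}\n\n"
        f"Student profile (goals, past work, weaknesses):\n{student_profile}\n\n"
        f"Give at most {max_points} specific, actionable suggestions, "
        "chosen for what this student most needs to work on next."
    )


if __name__ == "__main__":
    report = analyze_mix("140 BPM melodic dubstep demo, vocals in the drop")
    notes = distill_feedback(report, "Goal: label-ready EP; recurring issue: harsh 2-4 kHz range")
    print(notes)
```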

Stephen King:

That's interesting. And then I will let you go in a minute. I just think it's beautiful, and again this is just my imagination, because you've got this record of their journey. We've just seen that the Bruce Springsteen movie is coming out. You've probably seen it already.

John von Seggern:

I haven't seen it actually, but I know what you mean, yeah.

Stephen King:

You know, in the future there could well be AI integration into one of these documentaries, because you will have that element from working with your school anyway. That was just what I had in mind, because I saw that and I intend to go see it later. Imnah, sorry, you go ahead.

Imnah:

I was just gonna ask, in thinking about how we're using AI increasingly, how do we avoid one-size-fits-all music? How would creators, producers, songwriters use AI in a way that actually supports their personal style rather than takes away from it, so we don't all end up with basically a fast-fashion version of the industry?

John von Seggern:

Well, to be fair, yeah, that will be part of the industry. It already is, honestly. The fast-fashion model, that's funny. But you know, we talk about this all the time at the school. I think it comes back to the fact that you use different tools to make music and you get different results and different sounds from each one. I find that when I use Suno AI, even though it can make a very wide range of music, it all has kind of a similar sound to it, as though it was all made on the same piano or something like that, which is another reason why I would lean against using it for my own music. I'd rather use some crazy process that nobody ever thought of before. I should say I'm more of an experimental musician myself. Before COVID, I was playing with one of the real pioneers of electronic music. His name is Jon Hassell. Unfortunately, he died during COVID. But I was recording with him for four or five years, and every weekend we would get together and try to do something that we had never done before. A lot of the time it wouldn't turn into anything, but sometimes it would, and we would find something: wow, that's just really new and cool, and nobody ever did that before. Let's make a song around that. AI may be one of the ways that people do that in the future, but I don't think it'll ever be the only way.

Stephen King:

We're coming towards the end now, I think. One question here, which I'm gonna split in two. If you could set one rule for the responsible use of AI in music, or, since it's education, for AI in music education, either or both, what rules would you propose for the responsible use of AI integration?

John von Seggern:

Well, I think the biggest debate and question now is about copyright and intellectual provenance. I don't want to use tools where we're stealing from people, or where the original creator is not being remunerated in some way. But I think that will happen; I think it's just a matter of time. I don't know how it'll be worked out. In a previous incarnation, I was one of the earliest laptop DJs using computer files to DJ. That's what I transitioned into after my previous career as a bass player. And it was very similar to now. There was a lot of controversy about the copyright of the MP3 files and where they were coming from and who gets paid for them. It even affected me. Other DJs didn't like me because I was using a computer. I'd never thought of that; I was like, wow, this is so cool, we're using the computer. And then these other guys who had spent their whole lives playing vinyl records were like, no, that's not fair. But in a couple of years it all got normalized, and before long everybody was using computers, and now you hardly ever see somebody playing records, because there are a lot of advantages to using a computer. So I expect the same thing will happen this time. But as far as rules, I would say right now I wouldn't try to make my album with Suno, because they haven't really worked that out yet. And I'd like to see money going back to whoever created the ideas in the first place.

Stephen King:

That's amazing. I think we're just at the end of our time here. Imnah, would you like to close us down? Thank you very much, John.

Imnah:

Yeah, again, thank you, John. I feel like that was a really insightful conversation. I think it will also challenge listeners to think more about the authenticity an individual brings to any creative piece of work, especially in music, and how that's altogether different from using AI to enhance parts of it versus using AI to just create it from scratch. So, yeah, some great food for thought here. Thanks for talking to us, John.

John von Seggern:

Yeah, thanks a lot for having us.

Imnah:

I don't know where else to go from here.

John von Seggern:

It's fun talking about this stuff with you.

Stephen King:

And if everyone listening at home would like to like, comment, or follow, what's the social media tag they should follow for Futureproof? Futureproof Music?

John von Seggern:

Just look up Futureproof Music School. We're on all the major platforms.

Stephen King:

Super. And on that note, you can also support us if you so desire; the links are below. Thank you, everyone, and goodbye.
