In recent months, AI music has moved from novelty act into the realm of listenable music. For the first time, AI-generated songs from AI-generated artists are on the Billboard top 100 charts, and more musicians are coming out saying they use AI in their songwriting process.
Is this just another tech upgrade to the music-making process or does it signal something bigger in the industry? To investigate, SciFri producer and musician Dee Peterschmidt talks to journalist Kristin Robinson, who covers AI in the music industry, and Laurie Spiegel, an electronic and algorithmic music pioneer.
Further Reading
- Learn more about Laurie Spiegel’s Music Mouse virtual instrument, a rerelease of her 1986 program where you can drag a mouse and create chords and melodies.
Segment Guests
Kristin Robinson is a senior writer for Billboard, based in Los Angeles, CA.
Laurie Spiegel is a composer and programmer based in New York, NY.
Segment Transcript
FLORA LICHTMAN: I’m Flora Lichtman, and you’re listening to Science Friday.
In recent months, AI music seems to have exited the realm of novelty act and moved into the world of having living, breathing fans. But what’s the impact going to be? Science Friday producer and musician Dee Peterschmidt is here to investigate.
DEE PETERSCHMIDT: Hey, Flora. AI music got on my radar last year because I kept getting these videos on my algorithm.
(SINGING) Inventor of the road roller was a genius
He made a 20 ton machine create 60 tons of force
The secret is that the roller drum isn’t a solid piece of metal
DEE PETERSCHMIDT: AI-generated songs describing how various pieces of heavy machinery work. Pretty mainstream stuff. But for the first time last year, some AI-generated songs actually got onto the charts, like this one from Xania Monet.
[XANIA MONET, “SAFE IN YOUR HANDS”] So lift your hands up and soften your heart
Let go
DEE PETERSCHMIDT: And some pretty big names in the music industry have gotten pretty vocal about using it, too. Timbaland.
TIMBALAND: It’s like an assistant. When I do a beat and I’m like, yo, how would you take these drums, and rearrange it this way, and I’m like, oh, I would have never heard it that way.
DEE PETERSCHMIDT: So, what’s going on right now at AI music companies like Suno? And is this just another tech upgrade to the music-making process, or is it something else? I wanted to call up one of the journalists I follow on this topic, Kristin Robinson, Senior Writer at Billboard, who covers AI in the music industry.
Hey, Kristin. I was wondering if you also had a moment last year where you were like, oh, this stuff has kind of gotten to another level now?
KRISTIN ROBINSON: I think it was around Xania Monet, who you mentioned in the intro. I think Xania Monet was a real turning point, but you could point to a few different turning points. It just kind of felt like a lot of stuff started happening really fast.
So to back up a little bit. Before Xania Monet, June hits, and I find that there’s this song on TikTok called “A Million Colors” by Vinih Pray.
[VINIH PRAY, “A MILLION COLORS”] In spring morning, the sun rises on the horizon
A million colors
KRISTIN ROBINSON: I was seeing it in TikTok clips with Kylie Jenner doing her makeup to this song. And I realized that the song sounded kind of weird, and it turned out it was an AI song. And it was towards the top of the viral chart on TikTok. And then that just kind of felt like the first domino. And then things just totally got out of hand.
Later that summer, this band called The Velvet Sundown, which was fully AI-generated music and also AI-generated images to correspond with it, really caught fire online. And then Xania Monet. It was in September, she became a big headline for us at Billboard because she signed a, reportedly, multi-million dollar record deal with a traditional music company called Hallwood Media, who’s known for just working with regular artists previous to this point. And I think that a lot of people in the music industry considered Xania Monet’s signing, and the fact that her songs were starting to climb on our gospel charts, as a really big turning point when AI music has suddenly arrived.
DEE PETERSCHMIDT: When we say she signed with this label, who is the person getting the money here?
KRISTIN ROBINSON: OK, that’s a great question. So Xania Monet is the AI-generated avatar and character created by a woman named Telisha “Nikki” Jones. Telisha lives in Mississippi. She considers herself to be a poet, but isn’t someone who knows really how to get to a finished product of a final song.
And so, what they say is that Xania Monet is like a character for her to express her poetry through song. And so the person signing that deal would be Telisha “Nikki” Jones, and the royalties would go back to her. And she has a manager as well.
And what they would probably say is that these AI-generated characters or personas are no different from how Damon Albarn created the Gorillaz, with their little cartoon characters that kind of represented the band. They think that this is a way for artists to maybe express themselves in genres that don’t typically follow what they’re known for. So they would probably say that this leads to more experimentation. And yeah, it’s very interesting. There is a woman behind Xania Monet.
DEE PETERSCHMIDT: Yeah. I mean, can you talk a little bit more about what kinds of genres we’re seeing AI music kind of glom on to? We’ve got gospel. We’ve got these kind of weird heavy machinery videos. What other genres are coming up?
KRISTIN ROBINSON: I am seeing a lot in the Gospel, Christian realm. I’m seeing country music. That “A Million Colors” song that I mentioned is more of a doo wop throwback ’50s rock song. I think what I’m really seeing is that it’s going for niche genres that tend to be fairly formulaic.
Country music is not a super complex genre. Of course, it has so much heart to it, and that’s why we all love it. But the chord structure is usually pretty simple. It’s usually a verse, chorus, verse, chorus, bridge, chorus kind of structure. There’s not a ton of experimentation going on in that genre, and the lyrics tend to follow specific tropes. And I think that makes it a little bit easier to make a realistic sounding AI song in those genres.
Deezer, the French streaming service, has done a lot of research in this field, and they’ve said publicly that their research shows that 97% of listeners cannot tell the difference between an AI-generated song and a human-made song. So I think it’s very possible that some of these AI songs are being listened to and consumed by people who are not fully aware that they’re listening to AI music.
DEE PETERSCHMIDT: Well, I mean, do you think AI music has gotten like, quote unquote, “good” now? Are you one of those 97% people who has trouble telling the difference?
KRISTIN ROBINSON: Sometimes, it is hard to tell. I think the big tell still is that the audio quality isn’t fully– like, I don’t even know how to describe it. It’s a little bit of a scratchiness or–
DEE PETERSCHMIDT: It’s a little digital sounding.
KRISTIN ROBINSON: Yeah. It’s like– I don’t even know how to describe it.
DEE PETERSCHMIDT: Yeah, the audio version of pixelated.
KRISTIN ROBINSON: You know what I mean. And I think that’s the big tell. And if you’re in an environment where you don’t have good headphones, if you’re listening on your iPhone speaker, I think it’s actually pretty easy to get fooled now. I mean, I guess I would consider myself part of the 97%, although I think I can discern a lot better than your average person, just based on the nature of my job.
DEE PETERSCHMIDT: Yes. You’ve talked to musicians like Imogen Heap, Charlie Puth about their use of AI. What sense do you get from musicians about how they feel about AI music? And maybe we can start with those examples, first.
KRISTIN ROBINSON: Well, Imogen Heap has always been on the cutting edge. If anyone who’s listening here is familiar with her work, she’s always been both a musician and a technologist. She really feels that technology can make her art more impactful and take her to new places creatively. So she’s leaning in pretty hard, but she is still very concerned about models that train AI music models on works like hers, without any compensation for those who they’re training on. So she still tries to stay away from companies like Suno, which currently have models that are being trained on copyrighted material without licensing or compensation for rights holders.
But yeah, I’m seeing musicians really divided. I don’t really think you can say, everyone’s doing this or everyone’s doing that. I would say that a shocking number of professional songwriters and producers have been telling me, mostly off the record, that they are using Suno as part of professional songwriting sessions now. And so a lot of them have posited to me that there are probably songs on the Hot 100 right now that have bits and pieces of AI-generated material that is not disclosed.
DEE PETERSCHMIDT: Right.
KRISTIN ROBINSON: So a little crazy to think about that.
DEE PETERSCHMIDT: I want to go to Suno. Can you give us an idea of who the main AI music companies are? We’ve talked about these meme songs. We’ve talked about helping with the production process. What exactly are they selling to people?
KRISTIN ROBINSON: Yeah, so when we think of generating songs at the click of a button, that is really dominated, at this point, by Suno. It’s an AI music startup. Suno is quite controversial in the music industry because people feel very threatened by them. I obtained an investor pitch deck of theirs back in November and reported that 7 million songs are being generated on Suno every day. That kind of scale scares musicians quite a bit. Although those 7 million songs aren’t all necessarily making it onto streaming services, some of them are, and that potentially crowds out works made by human musicians.
So Suno is a big one. Udio is another big one. They did the same thing. You can type something into a text box and then out pops a song. Udio is now pivoting to do AI-powered remixing of already-made songs. This is a very popular category in AI music right now. Spotify is even getting into this realm soon. And basically what this means is that with licenses in place, you’d be able to take two of your favorite songs and create mashups, maybe remove the vocals so you can do a karaoke version. You can speed it up, you can slow it down, all these kinds of things. So you can play with music that already exists.
DEE PETERSCHMIDT: Well, it’s funny because last year, the major music labels were trying to sue the heck out of these AI music companies.
KRISTIN ROBINSON: Yes.
DEE PETERSCHMIDT: Now, they’re partnering with them. What happened there?
KRISTIN ROBINSON: Yeah. So I think the music companies are really realizing that they can’t make this go away. And so they need to find a way to extract value from it. I think another thing to keep in mind is that very recently, within the last decade, two of the three major music companies became publicly-traded companies. So they’re probably getting a lot of shareholder pressure to innovate, to integrate AI, and to capture value there. They don’t want to be seen as weak. They don’t want to be seen like they’re behind the ball. So I think that that’s also one of the reasons why they have been so willing to try to find reconciliation.
DEE PETERSCHMIDT: Right. The music industry has gotten left behind a lot in the past in regards to tech, and seems like they’re changing their tune. So with all these recent deals, do you have a sense of where this is all heading? What do you have your eye on this year?
KRISTIN ROBINSON: Interestingly, the music AI game has mostly been dominated by startups. My take on that situation is that music is a very hard thing to generate, and it’s also not a huge money maker. So I think it’s been largely ignored by your OpenAIs and Googles of the world until now. But Google has launched Lyria 3, its latest AI music model, on Gemini. It’s still not as good as Suno or Udio, but who knows how it will develop in the next year. And they also acquired an AI music company called Producer.ai. So I have my eye on Google, for sure, and I also have my eye on these new models from Suno and Udio.
DEE PETERSCHMIDT: OK, we’ll keep an eye on them, too.
Kristin Robinson is a Senior Writer at Billboard, who covers AI and the music industry. Thanks, Kristin.
KRISTIN ROBINSON: Thank you.
DEE PETERSCHMIDT: OK, stay with us, because after the break, we have one of the first musicians who experimented with algorithmically-generated music back in the ’70s, and we’ll hear her take on AI music.
[THEME MUSIC]
DEE PETERSCHMIDT: And now, a person with a one-of-a-kind perspective on AI music: musician Laurie Spiegel, a pioneer of electronic music and of algorithmically-generated music. And like AI music today, her work raised some eyebrows at the time. She wrote code for some of the first computer music technologies, and her 1980 album The Expanding Universe is considered one of the greatest ambient music albums of all time.
[LAURIE SPIEGEL, “PATCHWORK”]
Another song from that album is even on the Voyager Spacecraft’s Golden Record.
Laurie, it’s so great to have you here.
LAURIE SPIEGEL: Hi, glad to be here.
DEE PETERSCHMIDT: Did people think what you were making in the ’70s and ’80s was music? Kind of like this AI conversation right now, did you get shade when you got into this?
LAURIE SPIEGEL: There was a lot of heavy anti-computer sentiment back then, because computers belonged only to the most oppressive of organizations. There weren’t personal computers yet. It was the government, the banks, the insurance companies, the military who had computers. And the computers, innocent things that they were, inherited the image of the oppressiveness of their controllers in the public eye. Computers were called inhuman. They were hostile to the arts. They were not the warm, cuddly little laptops that we are used to at this point. So I was often accused of dehumanizing music. But of course, technology is the most human thing around. I mean, we are, by far, the animal that does the most technology.
DEE PETERSCHMIDT: I think in the late ’70s, you worked on an algorithm to replicate Bach’s harmonic style.
LAURIE SPIEGEL: Yeah. Bach is just a superlative ideal for me, an inspiration. And so I studied the harmonic progressions of Bach chorales extensively and wrote an algorithm, way simplified compared to the mind of Bach, that basically generated harmonic progressions that I felt were meaningful.
[BAROQUE-STYLE HARMONIC PROGRESSION]
DEE PETERSCHMIDT: Yeah. And obviously, he’s a super mathematical kind of composer, and it makes some sense to ask, how can I translate this into an algorithm? I mean, on the other side of the modern AI music conversation, there have been these studies of people who use these large language models experiencing something called deskilling, where you start to rely so much on these models that you end up outsourcing a lot of your own skill to them, and then that skill atrophies over time.
LAURIE SPIEGEL: And yet, different skills evolve during that process, because the writing of the prompts for an AI system is itself at the very first stage of becoming an art form, I think. But it’s quite different from the moment-to-moment generating of sound in response to your momentary emotions, the self-expressiveness of playing music. The way AI is being used, by giving a prompt and then waiting for a fabricated result, is quite different from that. I mean, the expressive nature of playing an instrument, it’s visceral. It’s tactile.
DEE PETERSCHMIDT: I mean, I’ve heard some music producers talk about using one of these AI music products because they don’t want to be left behind. And I’ve seen that language being used with other AI tools. What do you make of that? I mean, did you feel like you were going to get left behind back in the ’70s if you didn’t engage with computer programming and making fresh music?
LAURIE SPIEGEL: Just the opposite. I was kind of way out ahead, to the point where it was impossible to explain to people what I was doing. I was not at all left behind. I was, like, on the lunatic fringe. I couldn’t explain it to people. People would say, oh, you do music. What kind of music do you do? And I would say, well, I’m using computers, and immediately their expression would change, and they’d want to change the subject, too.
In the arts, it’s not a matter of keeping up. It’s a matter of something honest and authentic coming from inside of you, that you can embody in an experience external to you, that you can share with other people. Everybody’s always trying to keep up with what is new. That’s not what makes high quality artistic expression. It has to be from inside of you.
The music itself is what’s important. And that’s not something which is reliant on any individual technology. It’s gone through many centuries of evolution of different technologies, and it’s still obvious to us what’s really good music from the Renaissance that moves us and grabs us, or the early 20th century, or whatever. It’s what it does for us. What music does for us, that’s important. So I don’t think it’s really worth worrying about keeping up with tech. It’s just how you use it.
DEE PETERSCHMIDT: Yeah. Well, it seems like so many things with AI are forcing us to ask these really basic questions about the things that we like, and why exactly we like them. You were just talking around that. But I mean, what does music mean to you?
LAURIE SPIEGEL: Oh, God, I don’t know. I mean, the question that I posed at the beginning, the question of what the purpose of music is for people, and what parts of what music does to us these AIs are able to satisfy. They obviously can generate music-like material on demand, but it’s not necessarily the expression of emotion or feeling.
I really do want to play with them a bit more. I know that the writing of prompts is a very indirect way of making music, much like writing all those little dots on staff paper with a pencil. And then you get back a result, which is not what you had anticipated, because they’re not really interactive. Emotions are kind of– we don’t understand them very well yet, but they are rock bottom, an essential component of music.
And this is where the AIs kind of fall down. They don’t have them. And while they will probably figure out how to trigger them and evoke them eventually, and really good prompt writers might be able to do that, it’s still very much in its infancy. These non-interactive generative parrots, I guess you could call them. They speak the language that they have read all over the net, or throughout the repertoire, and they parrot it back, but they don’t understand it on the gut level that we humans experience it.
DEE PETERSCHMIDT: Laurie Spiegel, a pioneer of electronic music and algorithmically-generated music. Thanks for being with me, Laurie.
LAURIE SPIEGEL: Thank you for having me.
DEE PETERSCHMIDT: By the way, one of Laurie’s best-known pieces of software, Music Mouse, which she made in 1986, recently got rereleased on modern computers. It’s like an interactive instrument you play with your mouse: you basically drag your mouse around a musical grid, and it makes these fun chords and melodies. If you want to try it out, you can find a link to it on our website, sciencefriday.com/music.
FLORA LICHTMAN: Thank you, Dee. This fantastic episode was produced by Dee Peterschmidt. And listeners, if you have thoughts or feelings on this or anything else that we cover, we’re always here for it. 877-4-SCIFRI. Thank you for listening. We’ll see you tomorrow.
[THEME MUSIC]
Copyright © 2026 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/
Meet the Producer
About Dee Peterschmidt
Dee Peterschmidt is Science Friday’s audio production manager, hosted the podcast Universe of Art, and composes music for Science Friday’s podcasts. Their D&D character is a clumsy bard named Chip Chap Chopman.