Authenticity, autotuned: What AI is doing to music
And why raw human talent will become crucial to cut through the noise
The year is 2002, and I’m cradling my backpack in the backseat of my mom’s car with my eyes on the clock: it’s 3:17PM and she’s not driving fast enough. When we finally arrive home, I take the stairs two at a time until I’m sitting on my bedroom floor, drawing out a small treasure from the depths of my bag — Avril Lavigne’s debut album, freshly borrowed from a girl in my class who made me promise to have it back to her by morning.
I place the CD into my Sony Walkman, gingerly press the triangular play button and sit back as Lavigne begins to sing. Over the next few hours, I would replay that album until the angsty pop-punk songs of Let Go became sewn into my soul. So many years later, I can’t really remember the words, but what I remember are the feelings — an acute sensation of being seen and the exhilarating comfort of connection that entered through my ears, bypassed my mind and reverberated through my heart.
We all have an Avril Lavigne: an artist who spurred our musical awakening through a precise blend of notes and lyrics. For as long as humans have existed, music has been our mechanism for transcending differences and for coming together to lament and revel in what it means to be uniquely human.
Now, though, that’s changing. Artificial intelligence (AI) is pushing its way onto center stage as AI-powered tools infiltrate the music industry, raising questions about how AI will reshape music making and human artistry with the advent of an intelligence that can play every instrument, hit every high note and do it all in a matter of seconds.
Because such capabilities — and many more — already exist. Typing “music” into AI tool aggregator There’s An AI For That yields thousands of tools that facilitate everything from guitar guidance to music marketing to creating Taylor Swift-style chords via one GPT’s “unique approach to songwriting.” Platforms cater to nearly every stage of the creative process: Lyrical Labs, an AI-powered songwriting tool, pens lyrics; Landr auto-masters your track; and Singify promises, “Music Creation Has Never Been Easier,” with its AI-generated voice models that can be manipulated and inserted into your songs. And then there are tools like Soundful and Suno — all-in-one solutions that enable users with no prior musical training to create entirely new works in any real or imagined genre using only a few lines of text.
All of this is not happening in a vacuum. Music companies have responded, with Sony Music recently warning tech companies against using Sony’s music in the training of AI systems and Universal Music Group suing Anthropic, alleging “systematic and widespread infringement of their copyrighted song lyrics.” Musicians have stood before the US Congress imploring lawmakers to act, and states have begun passing legislation such as Tennessee’s ELVIS Act—a “first-of-its-kind” bill—to protect artists’ likenesses from unauthorized AI deepfakes and voice clones.
And yet despite such backlash, AI music platforms are gaining momentum. Suno (which has yet to reveal its training data sources) recently raised $125M in its latest funding round, and BandLab Technologies, a digital audio workstation platform for both novice and experienced musicians, has seen double-digit growth and now boasts more than 100 million users worldwide. Unlike Suno, though, BandLab is open about how it trains its AI models: by paying human artists to create music in the genres and styles it needs for its AI-powered tools.
“BandLab works really hard not to exploit its users,” Maxime Stinnett, Senior Product Manager at BandLab and jazz musician, tells me one day as we discuss the current state of music, sharing that the company strives for data dignity and hopes that its transparent approach in how it trains its models will appeal to human music makers. “BandLab is trying to do these things right so that we can effectively democratize and lower the barrier to entry for music making.”
This is a central argument in favor of AI-powered music tools — that they “democratize” the music making process, enabling anyone with a mobile phone and an Internet connection to produce music. “I was talking to a customer who said, ‘I’ve never made music — I just wanted to one day,’” Stinnett said of an artist who used BandLab to kickstart his career. “I think about how I found him because he was charting — his music was that beautiful. It’s amazing what these tools can do in the right hands.”
But while tech companies are embracing such individuals as artists, some record labels and trained musicians are more hesitant. Matt Rubano is a musician and composer (Taking Back Sunday, Angels & Airwaves, Lauryn Hill) whose twenty-year career has touched nearly every facet of musical production and performance. When discussing the idea that AI companies are democratizing music making, Rubano told me:
“I don’t think there’s any nobility in democratizing making music. If you don’t know how to make music but are pretending you do, you’ll always be standing on a paper-thin foundation. To claim that not knowing music theory or how to play an instrument are your barriers to entry to music — that’s like saying I want to open a restaurant, but I just don’t like touching food.”
A world where everyone has equal access to every tool needed to create music means human creativity can and will be augmented — but it also means that the value we place on human musical talent is likely to shift. Annie Aberle, Senior Vice President and Head of Creative at PULSE Music Group, sees a trend in which raw human musical talent actually becomes much more valuable.
“You can make a record that sounds like a live band [using AI tools], sure,” Annie tells me one evening. “But people have such discerning eyes. If they realize you’re ripping off live musicians for the result of something — they will find out and it will not go well.”
And yet, AI-generated music appears to be gaining traction anyway. YouTube’s “Dream Track” feature enables users to create 30-second instrumental soundtracks for Shorts featuring the AI-generated voices of artists like Demi Lovato and John Legend, while A-list musicians like Drake have experimented with AI-generated voices. Even though we know these songs are synthetic, we’re leaning back in our chairs, plugging in our headphones and listening anyway.
It’s important to note that AI is transforming music on a case-by-case basis; we won’t see the entire function of music making become automated from one day to the next. The process will be much more gradual: background music for advertising videos, scores for small indie films and royalty-free tracks intended for social media reels are the use cases most likely to be automated first. But that doesn’t mean that artists of all skill sets and backgrounds aren’t consumed by what AI means for their craft — and for their livelihoods.
“A lot of people in music are scared and aren’t buying into it,” Aberle said. “AI is like Pandora’s Box — there’s this feeling that once you open it and really start to look around, it’s going to be a doomsday for creatives.”
The music industry is well-versed in adapting to new technology. But while past inventions primarily changed how music is distributed and consumed, the AI revolution is altering something that feels much more vulnerable: the necessity of the human artist in a line of work that many have pursued with complete devotion.
“Every artist, period, comes from a place of ‘this is it. There is no plan B. I don’t have another life path,’” Aberle said, noting that it’s a common practice within the industry to ask an artist if there’s anything else more “logical” that they can see themselves doing. “If there is, they’re told to go and do that. That’s how hard it is.”
And so even with their fears around AI, Aberle said she can see a world where artists do pick up AI tools en masse to get ahead. But what they really should be doing, she argues, is focusing on their uniquely human differentiators. In an era when any person can create a tune with a few taps of a keyboard, the very human skills of playing instruments, resonating deeply with a crowd during live performances and communicating a compelling story remain age-old components of musical success. Not only are those skills not going anywhere; they have become more important than ever against an endless slew of auto-tune.
Most essential, Aberle argues, is the listener’s ability to identify an artist’s music as being human. “If the story and the concept and the lyrics aren’t telling a story that hasn’t been heard before — there’s nothing there,” Aberle said. “People — this new generation especially — want authenticity. They want the raw, they want the grit. They don’t want things to sound perfect.”
This, I think, speaks to that ineffable quality of music that makes it such an essential part of our being. It’s the reason I lay on my bedroom floor for hours listening to Avril Lavigne sing “Complicated” on repeat; it’s the reason we line up for hours to see an artist play live; it’s the reason we get goosebumps when a song seems to speak to our exact brand of emotion in a given moment.
We are looking to rub elbows with humanity in its purest form — a form that is only accessed when we are making real, vulnerable, authentic, imperfect human art. When human artists reach their raw voices into the void, we absorb their stories and experiences as our own. We connect with one another through pure vibration. If we lose the human behind the music, we lose this essential channel for human-to-human connection; we forfeit one of our avenues towards a more interwoven humanity. Preserving the human element in music, then, is necessary not just for human artistry, but for maintaining and deepening our collective interconnectedness.
So how do we move forward? As long as AI remains a tool in the music creation process, rather than an agent that fully replaces the artist, we can offload arduous tasks and free up time for what humans do most naturally — unfiltered creativity.
“That’s what keeps me feeling positive — is believing that we’ll always be one step ahead,” Stinnett said when asked what kind of future he hopes for, adding that AI tools that automate time-intensive tasks like social media marketing and booking gigs would leave artists more time for creating. “Humans are brilliant and wonderful and incredibly multifaceted and unique and, you know, this is what’s ultimately going to keep music moving forward.”
I wanted to test Stinnett’s theory — that humans are ultimately going to keep music moving forward. Curious to see if I could create an Avril Lavigne-inspired track that resonated emotionally, I prompted Suno AI to create: “A punk rock pop song sung by a female about the struggles of adolescence and the heartbreak of teenage romance.”
The result? A song titled “Rebel Heartbreak” (you can listen to it above), chock-full of clichéd lyrics sung by a half-digital, half-human entity, that left me feeling indescribably hollow. I can’t imagine nine-year-old me racing home to listen to this once, much less putting it on repeat for hours. For now, at least, human-produced art feels even more beautiful and more rare — worthy of not just hearing once, but of listening to again and again.
“I don’t know if I’ll 100% value the AI artist,” Rubano shared as we discussed this. “When my grandkids are like, ‘Yeah, we like this thing that’s not really human but we love it. It’s so cool.’ I’m sure I’ll listen to it and be able to like, tap my foot to it. But I’m also sure I won’t give a shit.”
Have you experimented with AI music? What do you want to see — and hear — in the future of music? I’d love to know.
Until next time,
Cecilia
RemAIning Human is written and edited by Cecilia Callas.