AI-Generated Music: How It Works, Who Owns It, and What It Means for Artists
In April 2023, a track called “Heart on My Sleeve” — featuring eerily convincing AI-generated vocals cloned from Drake and The Weeknd — racked up millions of streams before being pulled from platforms under copyright pressure. Neither artist sanctioned it: an anonymous producer known as Ghostwriter wrote and produced the track, then used AI voice models to make it sound like them. The music industry suddenly faced the crisis it had been postponing for years.
AI-generated music is no longer an interesting edge case. It’s a commercial reality with serious legal grey areas, a growing ecosystem of tools, and legitimate questions about what it means for human artists, listeners, and the cultural value of music itself. Here’s a clear-eyed look at where things actually stand.
How AI Music Generation Actually Works
Modern AI music tools are built on large generative models trained on vast libraries of recorded music, MIDI data, and in some cases, lyrics and audio waveforms simultaneously. The core technology varies by approach.
Transformer-based models (like those underlying OpenAI’s MuseNet and Google’s MusicLM) treat music as a sequence prediction problem — similar to how language models predict the next word, these models predict the next note, chord, or audio token. Given a text prompt like “melancholic piano piece in the style of Satie,” the model generates music by sampling from a probability distribution learned during training.
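To make “sequence prediction” concrete, here is a toy sketch of the autoregressive sampling loop such models run. Everything in it is illustrative: real systems like MusicLM operate on thousands of learned discrete audio tokens, and a trained transformer supplies the probability distribution, where this sketch fakes it with a hand-written rule.

```python
import random

# Toy vocabulary of "music tokens". Real models use thousands of
# learned discrete audio tokens, not human-readable note names.
VOCAB = ["C4", "E4", "G4", "A4", "rest"]

def next_token_probs(context):
    """Stand-in for a trained transformer: return a probability
    distribution over the vocabulary given the tokens so far.
    Here a simple hand-written rule fakes the learned model."""
    if context and context[-1] == "C4":
        # Pretend the model learned that C4 tends to resolve upward.
        return {"C4": 0.05, "E4": 0.4, "G4": 0.4, "A4": 0.1, "rest": 0.05}
    return {t: 1 / len(VOCAB) for t in VOCAB}

def generate(prompt_tokens, length, seed=0):
    """Autoregressive sampling: repeatedly sample the next token from
    the model's distribution and append it to the growing sequence."""
    rng = random.Random(seed)
    seq = list(prompt_tokens)
    for _ in range(length):
        probs = next_token_probs(seq)
        tokens, weights = zip(*probs.items())
        seq.append(rng.choices(tokens, weights=weights)[0])
    return seq

print(generate(["C4"], 8))
```

The loop is the whole trick: conditioning on everything generated so far is what lets a model maintain key, tempo, and motif across a piece, just as a language model maintains topic across a paragraph.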
Diffusion models (used by tools like Stability AI’s Stable Audio) work differently: they start with audio noise and iteratively refine it toward a target based on a text description, similar to how image diffusion models like DALL-E generate pictures.
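The iterative-refinement idea can be sketched in a few lines. In this toy loop, generation starts from pure random noise and repeatedly subtracts a fraction of the estimated noise. In a real diffusion model, a trained neural network conditioned on the text prompt produces that noise estimate; here an oracle computes it directly, so only the loop structure is meaningful.

```python
import math
import random

# Stand-in "target" waveform — the audio the text prompt supposedly
# describes. A real model never sees the target; it only learns to
# predict noise from training data.
target = [math.sin(2 * math.pi * i / 16) for i in range(64)]

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(64)]   # start from pure noise

steps = 50
for t in range(steps):
    # Oracle noise estimate (toy). In a real diffusion model, a
    # trained network conditioned on the prompt predicts this.
    predicted_noise = [xi - ti for xi, ti in zip(x, target)]
    # Remove a fraction of the predicted noise each step.
    frac = 1.0 / (steps - t)
    x = [xi - frac * ni for xi, ni in zip(x, predicted_noise)]

print(max(abs(xi - ti) for xi, ti in zip(x, target)) < 1e-9)  # True
```

The gradual schedule is the point: each pass removes only some of the noise, so the signal emerges over many small refinements rather than in a single jump — the same structure image diffusion models use.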
Voice cloning — the technology behind the Drake/Weeknd incident — uses a separate class of models trained to extract and replicate the acoustic characteristics of a specific human voice. Tools like ElevenLabs and RVC (Retrieval-based Voice Conversion) have made this accessible to anyone with a few minutes of audio and a consumer PC.
The result: Suno can produce a complete song with instruments, vocals, and lyrics from a single text prompt in under 30 seconds. The production quality does not yet match a major-label release, but it is often good enough to function in playlists, ads, games, and video content.
The Copyright Problem Is Genuinely Unresolved
The legal situation around AI-generated music is in active flux, and anyone telling you the answer is settled is wrong. There are at least three distinct legal questions in play simultaneously.
Training data copyright: Most AI music models were trained on copyrighted recordings without licenses. In 2024, Universal Music Group, Sony Music, and Warner Records — coordinated through the RIAA — filed lawsuits against Suno and Udio, alleging copyright infringement in the training process. These cases are working through the courts and will likely set significant precedents.
Ownership of outputs: In 2023, the U.S. Copyright Office ruled that AI-generated works with no human authorship are not eligible for copyright protection. However, works with “sufficient human authorship” — where a human made meaningful creative choices in the process — may qualify. The threshold for “sufficient” remains contested and case-specific.
Voice and likeness rights: Using AI to clone a specific artist’s voice without consent implicates right-of-publicity laws, which vary significantly by US state and internationally. The proposed NO FAKES Act would create a federal right for individuals to control AI replicas of their voice and likeness — but as of 2024, it has not been passed into law.
How Artists Are Actually Responding
The music industry’s response to AI has been more nuanced than either “embrace everything” or “ban it all.” A few notable positions have emerged.
Grimes publicly offered to split royalties 50/50 on any AI-generated song that used her voice, essentially inviting fans to clone her vocals legally. Holly Herndon has built an artistic practice explicitly around AI voice models, creating a licensed “Holly+” model that others can use to generate music in her style.
Many artists and their labels have taken the opposite stance, working to remove AI-generated content that uses their likeness from platforms and pressing for stronger legislation. The Artists Rights Alliance, a coalition including major artists, released an open letter in 2024 calling on tech companies to pledge not to develop or deploy AI that undermines or replaces human artists.
A third group — perhaps the largest — is quietly experimenting. Producers, film composers, and game audio teams are incorporating AI tools into their workflows for tasks like generating stems, drafting demo arrangements, or rapidly prototyping soundscapes — while still considering the final output “their” creative work.
What Streaming Platforms Are Doing
Spotify, Apple Music, and YouTube are under pressure from both directions: rights holders want AI-generated content flagged or removed, while AI tool companies want their outputs monetized through the same channels as human music. Spotify has said it will not ban AI-generated music outright but prohibits content that uses AI to clone real artists’ voices without permission, and has signaled support for labeling AI-generated tracks. How consistently any of this is enforced remains an open question.
There’s also a quieter concern: some streaming platforms have been accused of filling their own playlists with AI-generated tracks — inexpensive to produce and royalty-free — pushing human artists further down listening queues. The line between curation and commercial self-interest is increasingly blurry.
The Bigger Question
AI music tools are, at their best, genuinely exciting creative instruments. A bedroom producer with no access to a string section can now mock up an orchestral arrangement. A game developer can generate adaptive ambient music that responds to gameplay in real time. A songwriter can rapidly prototype ten melodic ideas in an hour instead of one.
But music has always been more than a sonic product — it’s a record of human experience, processed through a specific person at a specific moment. When you learn that your favorite song was written at 3am after a breakup, that context changes how you hear it. AI-generated music, by definition, has no experience behind it. Whether that absence matters — whether listeners will come to care — is the most interesting open question the technology poses.
The answer probably isn’t binary. AI music and human music will coexist, serve different purposes, and — in the hands of thoughtful artists — sometimes combine into something genuinely new. What the industry cannot afford is letting the legal and ethical framework lag so far behind the technology that the damage to human artists becomes irreversible before anyone addresses it.

