Artificial intelligence is transforming music creation, fueling both innovation and industry-wide debate about its role in the creative process. However, while generative AI has dominated headlines for its ability to produce entire songs from a single prompt, the quiet evolution of AI-assisted production has been unfolding behind the scenes for decades. From a production standpoint, the real question isn’t “Will AI replace me?” but rather “How can AI work for me?”
We put this exact question to Ben Porter (AKA benners), a London-based producer with millions of streams, an emerging presence in the R&B scene, and studio collaborations with acclaimed artists including Grammy-nominated rapper Wale, rising R&B figure Arin Ray, and LA-based artist Marco McKinnis. With global collaborations already under his belt and dozens of releases slated for 2026, Porter offers the perspective of a producer working at the intersection of artistry and technology.
The History of Generative AI in Music Production
“There’s a tendency to believe that AI is a new phenomenon,” Porter said during a 2024 industry event with music metadata platform Byta. “But the reality is we’ve used algorithmic composition within music for the last 75 years.”
Indeed, the earliest example dates back to 1956 with the Illiac Suite, composed using one of the world’s first computers. In the 1980s, systems like David Cope’s EMI (Experiments in Musical Intelligence) introduced rule-based AI to stylistic composition. Generative AI in its current, neural-network-driven form emerged around 2016, when Google’s Magenta, an AI research project, released a 90-second piano melody generated by a recurrent neural network (RNN) trained on MIDI files.
A Positive Use Case: AI-Assisted Music Production
Porter is open about embracing AI-assistive tools in his workflow, citing examples that have become essential for many producers over the past decade. For him, these tools “aren’t about replacing producers; they’re about enhancing precision and efficiency.” A few standouts include:
- The iZotope Suite: A collection of AI-powered plugins for mixing, mastering, and audio repair.
- LANDR: An online platform (and optional plugin) that uses machine learning to automate audio mastering.
- Oeksound’s Soothe: A smart EQ plugin that automatically reduces harsh frequencies in real time.
“I remember when I was working on Handle That,” says Porter, referring to his 2025 placement on Kadeem Tyrell’s EP KT.FM. “The horn section was feeling noisy and a little harsh, so I used Soothe to tame it and iZotope to clean up the recording. And when I send demos or pitches to labels, I’ll often run them through LANDR to give the track loudness and polish. It’s great — like having a co-pilot.”
For many music producers, technology like this represents an opportunity to optimize their workflow. No longer do hours need to be spent in the studio cutting out a resonant frequency or polishing a master. In Porter’s words, “they handle the more mundane tasks — like EQ tweaks or audio repair — so I can focus on performance, arrangement, and emotional nuance.”
The Four Faces of AI in Music Creation
Porter outlines four primary ways AI is being used in music creation today:
- AI-Assisted Creation: Plugins and tools that support, but don’t replace, human decision-making. For producers, this is by far the most beneficial.
- User-Driven Generation: Systems like BandLab SongStarter that remix or assemble user-selected stems.
- AI-Imitative Generation: Tools that mimic the style or voice of real artists, often for derivative works.
- AI-Composed Music: Fully generative systems (e.g., MusicGen, MusicLM) that can compose new tracks without human input.
An Ethical Lens on AI-Assisted Music Production
Naturally, with greater capability comes greater responsibility, and generative AI raises serious ethical concerns around consent, compensation, and originality. Those concerns, however, can be separated from the productive, creativity-enhancing uses of the technology.
“An example I always cite is ‘Now and Then’ by The Beatles,” Porter says. “It used AI ethically to isolate John Lennon’s vocals from a rough tape, not to replicate or fabricate them. That’s a productive use of the tech, not a replacement of human artistry.” His dual perspective as a record producer who has worked with Grammy-nominated talent and as a recognized voice in the AI conversation offers rare clarity on the issue.
For better or worse, AI is going to exist in music. As Porter puts it, you don’t need to become “an ethicist” overnight, but you do need to “understand the tools you’re using, what they’re built on, and how they might serve you.”
How Producers Can Leverage AI Responsibly
For producers looking to integrate AI into their studio flow, Porter offers a few starting points:
- Start with Assistive Tools: Explore smart plugins that help you mix, master, or clean audio more efficiently.
- Avoid Imitative Generation: Be cautious of tools that claim to mimic famous artists or copyrighted sounds. Even if detection is unlikely, consider the broader impact you’re contributing to.
- Stay Informed: Follow updates on tools, rights, and regulations, especially as laws evolve globally.
“AI can absolutely accelerate creativity,” Porter says. “But it’s no substitute for vision or taste. The best artists will learn how to harness it, not rely on it.”
That belief is reflected in Porter’s own momentum. With over 30 new tracks scheduled for release globally in 2026, he sees AI as a tool that can streamline the process, but never as a replacement for the creativity that drives his work. “AI might help me get to the finish line faster,” he says, “but the vision, taste, and connection always come from people.”