Human vs. AI Mastering
- Clay Francis
- Dec 2
I find it interesting to see AI's sudden surge into the public eye, specifically in music production. AI tools have been around for over a decade in the mastering sphere – Landr, for example. Of course, given the creep of AI into music composition, mixing, and basically everything else, it seems more valuable now to actually look at AI's role versus traditional human methods across all of music – and, for the purposes of this blog, specifically mastering.
Firstly, I think it's important to establish what mastering is. If you are on my page, you probably at least have a vague idea. Mastering is the final step music goes through before release. There are two main goals you should have going into mastering: the first is to optimize the levels, balance, and dynamics of the mix for its intended release format – most commonly streaming platforms. The second is more artistic in nature, and is about finding the most appealing aspects of the song as presented in the mix and allowing those aspects to shine through for the listeners. In determining AI's role in mastering, I think it is important to touch on these two key aspects separately.
Regarding the first point – technical optimization for the release format – this is where I think there is going to be the most similarity between a human mastering engineer and an AI alternative. Computers are good at optimization. Reading data and aligning it to targeted specifications is a computer's bread and butter. If an AI mastering tool gets a song, it can read all the metrics of that song and bring them as close as it can to its reference targets. So if the AI mastering tool is programmed to bring material to -8 LUFS with peaks at -0.5 dB, it will do exactly that. A human, similarly, will accomplish the same task if that is their sole objective.
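To make that concrete, here is a minimal sketch of the kind of target-matching an automated tool performs, assuming Python with the pyloudnorm and soundfile libraries. The file names are placeholders, the targets are just the example numbers above, and a real tool would use a proper true-peak limiter rather than simple gain changes.

```python
# Minimal sketch: measure a mix's integrated loudness and peak,
# then apply gain toward fixed targets, the way an automated tool might.
# Assumes pyloudnorm and soundfile; file names and targets are placeholders.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -8.0      # example loudness target from the post
TARGET_PEAK_DB = -0.5   # example peak ceiling from the post

# Load the mix and measure its integrated loudness (ITU-R BS.1770).
data, rate = sf.read("mix.wav")
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)
print(f"Measured loudness: {loudness:.1f} LUFS")

# Apply static gain to hit the loudness target, then check the peak level.
normalized = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
peak_db = 20 * np.log10(np.max(np.abs(normalized)))
print(f"Peak after gain: {peak_db:.1f} dBFS")

# A real tool would use a true-peak limiter here; as a stand-in,
# pull the whole file down if the ceiling is exceeded.
if peak_db > TARGET_PEAK_DB:
    normalized = pyln.normalize.peak(normalized, TARGET_PEAK_DB)

sf.write("mix_mastered_sketch.wav", normalized, rate)
```

The point is that this is pure measurement and arithmetic, which is exactly why a computer handles the first goal of mastering so well.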
If the goal of mastering were solely technical, AI frankly would be the winner, as it is going to be more cost-effective and achieve the same technical end result. However, the second purpose of mastering, the artistic objective, is where AI will always fall down.
Long before the advent of AI mastering tools and other automated mastering processes, including iZotope's Ozone, artists, producers, and mixers would seek out skilled mastering engineers whose style, workflow, reputation, and equipment would provide a specific and unique sound. There is not, nor has there ever been, one singularly best mastering engineer. Though mastering is often seen as subtle, the differences from one mastering engineer to the next have been enough to draw those seeking mastering to one engineer or another. So what does this mean in the context of AI?
AI does not have a style. It does not have a workflow. It doesn't have a reputation. And it doesn't choose its tools from a human listener's perspective. That perspective is where the human mastering engineer makes their careful choices. Fundamentally, there will always be a difference in the results.
There is an argument that mastering is so subtle that, if a mix is excellent, an AI tool would be totally sufficient, since no further artistic changes would need to be made. If that is the case, however, would it not be worthwhile to have an additional set of trained human ears confirm it? A good mastering engineer is capable of doing very much, or very little. If you have already invested the time and effort into writing the songs, practicing them, re-working them in production, recording them, and mixing them, it makes sense to me to complete the job in the human world.
That all being said, I haven't actually heard any AI mastering that comes even close to what many new basement producers can do with their DAW's stock limiter and an EQ. I would be quite eager to hear something if it's good, though – so feel free to let me know if you have pre/post-mastering sample files that you think sound good!