By Shane Johnson
Still in the wake of the successful OD DeEdger plug-in release, mastering engineer and software developer Jan Ohlhorst sits down with us in the lab to discuss mastering, loudness, dynamics, plug-ins, and much more.
Located in Bavaria, Germany, and working out of his top-notch facility, finemastering, he’s one of the most in-demand mastering engineers working today. Along with mastering numerous number-one singles and albums and earning music award nominations, he’s also an accomplished software developer on the technology side. His software brand, Ohlhorst Digital, develops cutting-edge DSP tools for the professional audio market worldwide. In this article we go one-on-one with Jan as he gives us a tour de force through the everyday trenches of mastering, plug-ins, and a whole lot more.
Greetings Jan. Thanks for joining us here at Tokyo Dawn Labs. Can you tell us briefly how you got started in the business? When did you decide that mastering was going to be your life’s passion?
Ever since I was a small child, I was fascinated with music and really wanted to understand how it was made. I took my first steps in making music when I was 13 years old. I started with tracker software (Scream Tracker & Fast Tracker), then later moved on to MIDI-based synthesizers and a small mixer. I learned the technical things very fast, but soon realized that I would need some more help with the right placement of all the notes. From there, I took some lessons in music theory and harmony. My teacher realized that my technical skills were much better than my musical skills, so he asked me if I was interested in mixing live music, since he played in a top 40 band that was seeking a mixer. I was around 18 when I started mixing live bands, and I learned a lot from that experience. The money that I earned from mixing I invested into my studio, which continued to grow. I kept making music and also helped other local producers and DJs with mixing. I also recorded a lot of choirs and classical music, but then realized that mixing and mastering were my real passion.
How did you learn the fine art of mastering?
It’s always been my opinion that mastering isn’t something someone should start with directly, because of the technical perspective involved. I found it really beneficial to learn a bit of the whole production process, from composing, arrangement, recording, and editing to mixing and mastering. Everything that trained my ears was helpful. I also learned a lot when I wanted to improve my room acoustics. With my room, I read and experimented a lot, which also later helped me during the development of the speaker system that I’m using.
How has mastering changed over the past ten or more years from when you first got started?
Maybe it got easier since the tools got better, especially the digital ones. You also have better access to knowledge now. When I started there wasn’t much information out there, so you needed to try everything yourself. I guess some people might disagree here though, since too much knowledge could also be cumbersome and lead to wrong conclusions.
Do you have a general modus operandi or any particular system of approach when you start a mastering project?
I’ll first listen to each song for a few seconds to see if there are any potential problems that may need to be communicated. These could be changes needed in a mix, or even requesting the delivery of stem tracks to avoid making compromises. It’s simply a technical cross-check I typically do on the day I receive the mixes, even if the mastering session itself is scheduled on a different day. When I do the mastering, I’ll listen to the songs a bit longer to find out their mood and message. This is typically the time when I get the vision in my head of how it should sound. For example, is the sound dark? Is the message positive, melancholic or sad? If a song has a sad message, it will get an appropriate corresponding sound, such as adding some nice highs with an analog tube EQ for example.
Is your mastering approach different when you’re doing a whole album as opposed to just a single song?
Technically, not really. When I master an album I’ll also look at the flow, such as levels and the sound relationship between the songs etc.
How long does the average mastering job take per song?
Maybe about 30 minutes. That doesn’t include downloading, communication, recall notes, invoicing, and so forth.
Are there any styles of music that are more difficult or easier for you to work on?
No. Maybe because my personal musical taste is not genre specific, as I can find good music in every style.
You’ve worked with musicians, producers, and independent and major record labels from all over the world, including Germany, the USA, Canada, Australia, Russia, Switzerland, Austria, and China. Are there any differences when it comes to mastering projects from city to city or country to country?
I would say that projects from different regions don’t have any impact on my decisions.
Can you walk us through your typical signal path?
From a high-level perspective, my signal path is essentially digital > analog > digital. This means that I can choose to have digital processing before and/or after the analog chain. I use a Mytek 8×192 for the AD/DA conversion. My outboard gear is connected to a custom relay-based insert switcher built by Markus Samper. That really helps with the decision-making process, as it’s more reliable and I can quickly see what outboard is in use. I can also do routings such as running two compressors in parallel, or blending in only part of the box tone of an EQ if needed. This gives me the extended possibility to apply the desired amount of analog tone, saturation, and texture to the signal.
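The parallel routing described here can be illustrated with a digital analogy. The following sketch is purely hypothetical — the toy compressor and function names are mine, not part of Jan’s analog chain — and simply blends a dry signal with a processed copy of itself:

```python
# Hypothetical sketch of parallel processing: blend a dry signal with a
# processed (e.g. compressed) copy, analogous to the parallel routing on
# an analog insert switcher. The "compressor" here is a deliberately
# crude, static stand-in for illustration only.

def toy_compressor(samples, threshold=0.5, ratio=4.0):
    """Very crude static compressor: reduce anything above the threshold."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def parallel_blend(dry, wet, mix=0.5):
    """Sum dry and processed paths; mix=0 is fully dry, mix=1 fully wet."""
    return [(1.0 - mix) * d + mix * w for d, w in zip(dry, wet)]

signal = [0.1, 0.9, -0.8, 0.3]
blended = parallel_blend(signal, toy_compressor(signal), mix=0.5)
```

The same blend idea applies whether the two paths are analog hardware selected by a relay switcher or plug-ins on parallel tracks.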
What are you currently using for a DAW?
I’ve been using Cockos’ REAPER for over 7 years now and loving it. It is fast, stable, and moreover, heavily customizable! I’m faster and more reliable with it than with any other DAW I’ve tried out for mastering. When I started using it, I couldn’t find any other professional mastering engineers who also used it, so I’m using a lot of self-written scripts that simplify and speed up my workflow. The REAPER community is great, and nowadays there are a lot of scripts that you can download with ReaPack (a built-in script downloader), which is only a mouse click away.
What are you using for monitors?
I’m working on a Suter/Ohlhorst System 515/99. It’s a modular loudspeaker system that I developed with my friend Dan Suter, who is a reputable mastering engineer from Switzerland.
Do you listen only using one set of monitors, or do you also listen using nearfield monitors?
I’ve done that in the past when working on other speakers, but these days I completely trust my current system which is perfectly adjusted and calibrated to the room.
Can you tell us a bit about your room?
My room is about 28 m² (301 ft²) in size. I’ve experimented a lot with room acoustics and also became acoustically familiar with other recording, mixing, and mastering rooms I’ve worked out of. Since I’m very sensitive to changes, some acoustical setups work for me while others don’t, therefore my own room acoustics are very specific. The front part of the room is mostly about absorption with a bit of deflection. I love diffusors when placed right. The big one on the back wall, which is a very large bandwidth custom design, is very important to me. The smaller diffusors on the ceiling and side walls in the back part of the room also play a significant role.
Is all of your processing done in the analog realm?
It’s never entirely analog, but more of a hybrid approach. Mostly, a combination of both analog and digital tools gets used. Sometimes it can be digital only, but it really depends on what the project needs.
Do you have any favorite go-to plug-ins that you use during the mastering process?
Voxengo SPAN is used in all of my projects for spectral metering, as I love its interface and flexibility. Voxengo GlissEQ is my main minimum-phase EQ for equalization. Since all digital minimum-phase EQs basically sound the same, which can easily be confirmed with a phase difference test, it comes down to usability for me. I like the FabFilter Pro-Q 2 for its linear-phase and natural-phase modes. My favorite for compression is TDR Kotelnikov GE, as it’s very flexible and transparent. Sometimes I’ll use the Solid Bus Comp and the Vari Comp by Native Instruments.
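The phase difference (null) test mentioned here can be sketched in code. In the minimal, purely illustrative example below, two common filter structures implementing the same RBJ-cookbook minimum-phase peaking EQ stand in for two plugin EQs; subtracting their outputs leaves a residual near the double-precision noise floor, confirming they are effectively the same filter:

```python
# Sketch of a phase difference ("null") test: two different filter
# structures realizing the same minimum-phase peaking EQ (RBJ cookbook
# coefficients) are run on the same noise and subtracted. A residual at
# the numerical noise floor means the two EQs are effectively identical.
# These pure-Python biquads are stand-ins for plugin EQs.
import math, random

def peaking_coeffs(fs, f0, q, gain_db):
    """RBJ audio EQ cookbook peaking filter, normalized so a[0] == 1."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha / a
    b = [(1.0 + alpha * a) / a0, -2.0 * math.cos(w0) / a0, (1.0 - alpha * a) / a0]
    aa = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha / a) / a0]
    return b, aa

def df1(b, a, x):
    """Direct Form I biquad."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        o = b[0]*s + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2
        x2, x1, y2, y1 = x1, s, y1, o
        y.append(o)
    return y

def tdf2(b, a, x):
    """Transposed Direct Form II biquad."""
    y, s1, s2 = [], 0.0, 0.0
    for s in x:
        o = b[0]*s + s1
        s1 = b[1]*s - a[1]*o + s2
        s2 = b[2]*s - a[2]*o
        y.append(o)
    return y

random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(4096)]
b, a = peaking_coeffs(44100, 1000, 1.0, 6.0)
residual = max(abs(p - q) for p, q in zip(df1(b, a, noise), tdf2(b, a, noise)))
# residual sits near the double-precision noise floor
```

In practice the same test is done by phase-inverting one EQ’s output against the other’s inside the DAW and listening to (or metering) what remains.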
When it comes to multiband compression and dynamic EQ, I’m mostly using FabFilter Pro-MB and TDR Nova GE. I use several different plug-ins for limiting, which changes depending on the project. My main ones though are Voxengo Elephant and DMG Audio Limitless. Last but not least, DeEdger is also one of my go-to plug-ins. I also use my own proprietary developed plug-ins, which I unfortunately can’t talk about yet.
Do any projects ever come in that already have crushed levels?
Yes, but that has decreased over the past few years. Though some mixes might be delivered quite hot, that’s fine with me as long as it sounds good. It mainly gets problematic if the mix itself has issues that could better be solved with a more dynamic version.
What are your thoughts on the ongoing loudness wars?
When we talk about loudness, we’re also talking about density. My focus is more so on the latter and defining an appropriate “distance” of the elements in a mix.
Have you ever got caught up in it yourself?
I used to get asked frequently for super loud masters up until a few years ago, but these days it’s much less. Since the tools have become better, it’s now easier to make it loud while still sounding good. On the other hand, it’s interesting to see super dynamic masters of music where I would actually prefer a bit more density.
Do you ever add any effects or get asked to add effects?
It’s extremely rare on full mixes, but sometimes on stems. In either case, I’d communicate it to the client.
Are there any common problems you find with mixes you get for mastering?
I strictly distinguish between problems that are solvable, which I call offset problems, and problems that might lead to compromises. In the former, for example, a mix that is simply too dark could be completely solved with a high shelf EQ boost. When I mix myself, I actually like to mix a bit dark. This allows for a decision later on in the mastering stage of what kind of EQ to use, such as one that’s clean or one that’s more colored. Some examples of problems that might lead to compromises would be unbalanced levels or an uncontrolled bass area. The latter typically comes from non-optimal room acoustics.
What’s one of the hardest things for you to do during mastering?
Technically, nothing that I’m aware of. On rare occasions, if a client can’t decide, such as on a revision request in the 0.2 dB range, then it’s important to communicate and help with the decision.
Since you use a hybrid combination of analog and digital tools, what’s your approach to recalling sessions?
I take photos of all my analog gear settings. I always start with a photo of my screen, so that I have the date, time, project name and version, and current name of the song. I then take photos of the hardware settings used for the song. Taking a photo of the screen was actually inspired by my colleague Brad Blackwood. I saw photos of him showing the screen of his sessions on Facebook and thought, “That’s actually a clever idea for making it part of the archiving!”
What do you think makes a great mastering engineer in this modern era of music?
A great mastering engineer is a great mastering engineer. I don’t think it’s really era related, as you still see a lot of the older mastering engineers doing an outstanding job. It’s all about the experience.
How long do you feel it takes for someone to get to that level of skill?
Honestly speaking, I think you need at least 10 years of professional experience to be even “ok”. This means that you can improve a mix to a degree that your client can’t by himself. Skill and knowledge are only part of the equation though. You also need a very good room and speakers that you can completely trust.
What do you enjoy the most out of mastering?
I love working with music and the people who share the same love for it. Along with liking all the technical aspects, I also enjoy the mindset, like trying to connect emotionally with the project as much as possible. That’s an area where I grew the most in the last few years. I’ve had sessions where I got tears in my eyes because I knew the lyrics from the artist were true, and also very sad.
Along with being a professional mastering engineer, you’re also a very accomplished software developer. Your first professional software-based audio signal processing tool is the recently released and very successful DeEdger plug-in. How did DeEdger come about?
The main reason I started developing my own plug-ins and algorithms was that there were no alternatives available on the market. I acquired the skill set needed for this at university, where I learned object-oriented programming and digital signal processing. My own plug-ins are unique and solve problems I couldn’t have solved otherwise in the way that I like. Hardness and edginess in audio signals are a common problem, so I first needed to understand in detail what caused them. I started investigating this around 10 years ago and was getting a good idea of it while working on mixes. That was the early conceptual beginning of DeEdger.
How long did it take you to develop the idea?
The idea for DeEdger came together about 3 years ago, after several iterations with other algorithms that I originally discarded. The number of parameters in the initial version was much higher; the development version had maybe 30 to 40 more parameters than the currently available public release. When an algorithm looks promising, I’ll try using it on my clients’ projects. This is when I’ll tweak it to find the sweet spots and weak spots. If I’m not completely satisfied, I’ll remove it and try to achieve a similar result with other tools for comparison. After further refinements of the algorithm, there’s eventually that moment when removing it from the project no longer sounds as good as when it was inserted. That’s the point when I know I’m heading in the right direction.
In basic layman’s terms, what exactly is DeEdger doing to the audio signal under the hood?
DeEdger’s algorithm operates mainly in the time domain. Since most of the attributes defining hardness are in that domain, it specifically operates on the transients in a user-defined frequency band. I mentioned earlier that it has about 30 to 40 parameters, but most of them operate under the hood rather than in the actual graphical user interface. A lot of work, informed by my mastering engineer knowledge, went into automating all those internal parameters in order to remove the weak spots and maximize the sweet spots. My aim has always been to offer tools that the user can simply set up to improve their sound. That’s typically the task that takes the most development effort.
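As a purely illustrative aside — and explicitly not DeEdger’s actual algorithm, whose internals are proprietary — the general idea of acting on transients inside a user-defined frequency band in the time domain could be sketched like this. Every function, envelope constant, and threshold below is invented for the sake of the sketch:

```python
# NOT DeEdger's algorithm -- a naive, invented sketch of the general
# concept: isolate a frequency band, duck it briefly when its level
# jumps (a crude transient detector), then recombine with the full signal.
import math

def one_pole_bandpass(x, fs, lo, hi):
    """Crude band split: one-pole high-pass at lo Hz into a one-pole low-pass at hi Hz."""
    out, hp_y, x_prev, lp_y = [], 0.0, 0.0, 0.0
    a_hp = math.exp(-2.0 * math.pi * lo / fs)
    a_lp = math.exp(-2.0 * math.pi * hi / fs)
    for s in x:
        hp_y = a_hp * (hp_y + s - x_prev)              # high-pass stage
        x_prev = s
        lp_y = (1.0 - a_lp) * hp_y + a_lp * lp_y       # low-pass stage
        out.append(lp_y)
    return out

def soften_transients(x, fs, lo=3000.0, hi=7000.0, depth=0.5):
    """Duck the band whenever its fast peak level jumps well above its
    slow average level, then recombine the reduced band with the rest."""
    band = one_pole_bandpass(x, fs, lo, hi)
    fast_c = math.exp(-1.0 / (0.001 * fs))   # ~1 ms peak follower
    slow_c = math.exp(-1.0 / (0.050 * fs))   # ~50 ms average follower
    fast, slow, out = 0.0, 1e-9, []
    for full, b in zip(x, band):
        level = abs(b)
        fast = max(level, fast_c * fast)                  # fast peak envelope
        slow = (1.0 - slow_c) * level + slow_c * slow     # slow average envelope
        gain = 1.0 - depth if fast > 3.0 * slow else 1.0  # duck band on jumps
        out.append(full - b + gain * b)                   # full signal, reduced band
    return out
```

A real product would need far more sophistication (better filters, lookahead, program-dependent detection), which is presumably where those 30 to 40 automated internal parameters come in.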
Do you use DeEdger on your own client based mastering projects? If so, how do you use it in your own mastering signal path?
Yes! I use it on almost every project. Sometimes more, and sometimes less. In particular, when I’m working with stems, there might be several instances with different settings on a given stem track.
Where in the signal chain would you recommend DeEdger be inserted when used for mastering?
I typically use it right at the beginning of the chain. I generally try to fix problems early on in the chain so that any subsequent tools, like the analog outboard gear, can be used for broader shaping and texturing.
Can you give us some general starting point DeEdger settings for mastering that will help us get started?
Sure. I have some general starting points that I’d like to share. On the master bus I use it mostly in the 3 kHz-7 kHz FREQ range with the Q between 0.7-2 and the DEPTH in the 5-8 area. On stems, the DEPTH might go up to 10 and I’ll even use several instances in series with different FREQ and DEPTH settings.
Once DeEdger is initially set up, are there any parameters you’d recommend the user focus on for further fine tuning?
Using Focus Listen together with toggling the Active button is definitely something I’d recommend. This helps with finding the problematic frequency band. Next would be setting the right amount of depth.
Do you have any future development plans with DeEdger? Anything you can share with us that we might expect to see down the road?
The current feature set is quite stable at this point and everything is working as intended. There are some feature requests, like the option of listening to the delta signal, and variable linking/weighting of L/R and M/S. I like the ideas, but they need to fit into the philosophy and paradigm. Adding more options can sometimes increase the probability of confusing the user with an overloaded GUI. It can also potentially slow down workflow and increase the number of weak spots. I don’t think that needs to be the case with the above requests, but they’ll need to be implemented correctly and feel 100% right to me.
What’s next in the Ohlhorst Digital product line? Is there anything you have currently in development that you can tell us about?
I have roughly 10 nearly finished algorithms and concepts from my list of over 20 ideas. Most of them I’ve been using over the past several years in my work as a mastering engineer. Immediately after the release of DeEdger I was working on three different plug-ins, but now I’m focusing on a specific one. I can’t reveal a lot about it yet, other than it’s going to be something very unique again. I hope I can release it this year.
In closing, are there any final words of wisdom or advice you can offer to someone who’s new at learning how to master?
Be kind, be open minded, always rethink your approaches, and continue to try out new workflows. I also recommend redoing some jobs you did a few years ago and comparing the results, which is always a great exercise to do.
Thanks again for joining us here today at Tokyo Dawn Labs. We’re looking forward to having you back again soon.
Thanks for having me. It’s been a pleasure.
That ends our inaugural interview with mastering engineer and software developer Jan Ohlhorst. Included below is the starting-point mastering preset for DeEdger that Jan graciously shared with us during the interview. It is intended as a starting point for DeEdger on the master bus:
Mastering Starting Point Preset
Copy the following code > right-click on the plug-in interface background > Paste State (Ctrl+V)
<DeEdger knob_freq_param="3000" knob_q_param="0.7" knob_depth_param="5.0" button_active_param="On" button_compensate_param="On" button_focus_listen_param="Off" mode_param="L/R"/>
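For readers who want to experiment with variants of this preset (such as the stem settings Jan described earlier, with DEPTH up to 10), a small hypothetical Python helper can generate state lines in the same attribute format. The attribute names are copied from the preset above; the helper itself is not an official Ohlhorst Digital tool:

```python
# Hypothetical helper for generating DeEdger "Paste State" lines in the
# attribute format shown above. Attribute names are copied from the
# published preset; this script is not an official Ohlhorst Digital tool.

def deedger_state(freq=3000, q=0.7, depth=5.0, active="On",
                  compensate="On", focus_listen="Off", mode="L/R"):
    return ('<DeEdger knob_freq_param="{}" knob_q_param="{}" '
            'knob_depth_param="{}" button_active_param="{}" '
            'button_compensate_param="{}" button_focus_listen_param="{}" '
            'mode_param="{}"/>').format(freq, q, depth, active,
                                        compensate, focus_listen, mode)

# Example: a hotter stem-style setting at 5 kHz with DEPTH 10
print(deedger_state(freq=5000, q=1.5, depth=10.0))
```

Calling deedger_state() with no arguments reproduces the master-bus preset line above verbatim.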