Tom Holkenborg, known to his fans as Junkie XL, is one of those rare
artists who truly merits the oft-used praise “Renaissance man.” His main
entrance onto the world stage was his 2002 remix of Elvis Presley’s “A
Little Less Conversation,” and since, he’s done official remixes for
Coldplay, Madonna, Justin Timberlake, Michael Bublé, and countless
others. His composing talents have yielded original electronic dance
music that keeps the critical listener engaged even as it draws the
ravers in droves; an 18-year-long list of scores for top-selling video
games; and collaboration with Hans Zimmer on such films as The Dark Knight Rises, Madagascar 3, and Man of Steel.
On the cover art for the new album, you’re right there
with a synthesizer—not a laptop, DJ rig, or roomful of ravers, but a
keyboard. Is there a statement there?
There is. Every sound on the album has been somewhat
synthesized—obviously synthesizers themselves but also sound design on
acoustic drums, bass, and guitars. It also refers to how people now seem
to live a synthesized life. It feels different than 15 or 20 years ago
when people actually hung out. Now a lot of it is through social media
without people being together at the same time. So having a synthesizer
on the cover goes to many different levels.
Unlike a lot of electronic records today, this one sounds very played—almost like you’d recorded on tape and had to capture song-length performances.
Well, I fell in love with electronic instruments from the
start. I listened to all the Trevor Horn productions when they came out,
and I loved the electronic approach before dance music started. In the
’80s, I started working in a music store, selling all these instruments
and that’s when I really fell in love with it. But before that, I was a
drummer, a piano player, a guitarist, and a bass player, and there was
no room for you in any band if you weren’t great at what you did.
I’m trying to find a balance because with plug-ins and software, you can do stuff beyond your wildest dreams, but there’s something about sitting behind a drum kit, playing for five minutes,
and really nailing a performance, and I think that’s overlooked a lot.
Translating that to the keyboard world, you can perform a piece on a
synth whether you’re physically able to play it or not,
but entering the notes and turning the knobs creates a performance
that’s “between brackets”—even if you correct it and speed it up later.
That’s different from many of my colleagues who’ve only made music in
pattern-based computer environments where they copy and paste things they’ve made.
Was there an “aha” moment when you knew you wanted to play synths and samplers?
That moment was when the Atari ST that had built-in MIDI came out. It was when the Roland D-50
came out. It was when the Yamaha DX7 II came out. It was when Kawai
started making really interesting synthesizers. It was when Korg was
picking up on it and making workstations. You’d go to NAMM, and every
half a year it was insane what the new instruments were capable of. Now,
it’s hard for developers to come up with something that’s actually new.
It seems like most of the effort nowadays is aimed at better and better
emulation of acoustic instruments. In the ’80s, a lot of synths were based on the fact that you could make sounds nobody else could.
Synthesized has four-on-the-floor club stompers
but also through-composed songs and even some ambient tracks. Did making
a record that’s hard to pigeonhole present any challenges?
This album is such a weird sheep of the pack, so to
speak. I know I’m probably taking a commercial hit with it, but I also
do so much film and video game scoring that I was able to treat Synthesized as
a creative challenge, rather than a commercial one of maintaining my
position on the dance floor. I talked about this a lot with one of my
best friends, who was actually A&R-ing this album, and he said,
“What if we kicked your record shelf really hard and a couple hundred
records fell out from the 10,000 that are in there,
and we just saw what landed on top of what? If a techno record from ’95
landed on a record from ABBA landed on an Ennio Morricone record from
’67, what would that bring you?” That basically turned into the title
track. It’s a disco beat. It has a hard techno line you’d hear in the mid-’90s, but it also has these other vocals and these Ennio Morricone guitars and melody lines.
How did you create the lush string lines and pads in the breakdown to “When Enough is Not Enough”?
Most of the pads were made with my analog synths that I
have. What I really like are the 8500 and the 3500 [modular] synths by
Analogue Systems. They’re monophonic, so to create a chord,
you record several passes. That’s what I did on that track and others.
For instance, “The Art of Luxurious Intergalactic Time Travel” where
chords of the synths were basically recorded three, four, or five times
to create the full chords. One of the beautiful things about programs
like Pro Tools and Cubase is that they make it so easy to do that.
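The split-then-overdub workflow he describes (sketch chords with a placeholder pad, then break them into monophonic lines to re-record one pass at a time on a monosynth) can be sketched as simple note bookkeeping. This is only an illustration; the chord data layout is an assumption, not any DAW's actual format.

```python
# Split block chords into monophonic voice lines, the way you would
# before re-recording each line on a monosynth, one pass per voice.

def split_chords_to_voices(chords):
    """chords: list of chords, each a list of MIDI note numbers.
    Returns one monophonic line per voice, highest voice first."""
    n_voices = max(len(c) for c in chords)
    voices = []
    for v in range(n_voices):
        line = []
        for chord in chords:
            notes = sorted(chord, reverse=True)  # top voice first
            # If this chord has fewer notes, hold the lowest one.
            line.append(notes[v] if v < len(notes) else notes[-1])
        voices.append(line)
    return voices

progression = [[60, 64, 67], [57, 60, 65], [55, 59, 62]]  # C, F/A, G
for i, line in enumerate(split_chords_to_voices(progression)):
    print(f"voice {i}: {line}")
```

Each returned line is then one recording pass on the mono modular; stacking the passes rebuilds the full chords.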
Do you always construct polyphonic parts one voice at a time?
Well, usually I create a rough demo with sounds I like
from plug-ins. So I’ll take a patch called “Warm Pad” or something, and
I’ll play the chord progression I want with rough filter settings. Once
I’m happy with the song structure, I split all those notes out to
individual monophonic lines and start playing with the modular synths to
get a sound that has more character, and then I record pass by pass. And with DAWs, you’re not losing any sound quality, so you can
take pass after pass. You don’t need a wall-to-wall modular setup to do
this, either. It’ll take time, but that’s the beauty of electronic
music—you can do everything yourself and take the time to get the sounds right.
So, do real analog and modular synths always replace plug-ins in your tracks?
Sometimes yes, sometimes no. If you take the Arturia
Prophet and just use the presets, you might as well use the original.
But the cool thing about Arturia is that they can modulate so many
parameters at once. When you start doing all these cross-modulations,
you may get a sound that’s impossible to make with the original. On the
other hand, if you do a quick bass line in [Native Instruments] Massive,
that plug-in has a distinctive quality, and if you want that, you want
Massive. But if you just wanted a percussive bass line, then you may get
more depth, punch, and so on out of the analog world.
There’s a difference between a sound being impressive when soloed and sitting properly in a mix. . . .
That’s right. Sometimes you program a sound with a soft synth, then when you replace it with an analog synth, it sounds too
full and loses a certain quality. I have the Moog Voyager XL and it’s
one of the best synths ever built. But not every bass sound off it is
the best bass for every track—sometimes the track needs something from
Native Instruments FM8. So it’s not like “analog is warm” and “digital
is hard”—it doesn’t work like that. You have to go track by track and
think whether [an analog synth] is going to make your track better.
What do you think about the boost that analog synths have gotten from the EDM scene?
I love the injection the medium has given this whole industry.
If you look at Moog, Analogue Systems, Synthesizers.com, Dave Smith,
and so many others—these are all people that spent a lifetime making
something that, in their own world, was the best possible thing it could
be, and I love that attitude. Go to NAMM and you’ll find somebody who
makes just one pedal. It’s called Something Fuzz and it’s their life’s
work. More of those companies can thrive now because of EDM becoming so
big and electronic producers actually making money and saying, “Let’s
buy a real analog synth and see if I can make something special.”
When creating synth parts or repeating motifs in a
song, do you play in the lines on a keyboard, sequence them with a
pencil tool, or something else?
I consider myself a crappy keyboard player,
but it’s amazing how I get certain things done on a keyboard. There are
many other occasions where I come up with a synth figure or bass line
or arpeggio that I wasn’t able to play on the keyboard. Sometimes I pick
up a guitar, play a lick, and think, “Oh, this
should be the basis of the song.” Sometimes I program a bunch of notes
in the step editor and move them around with surgical precision. I try to find a balance, and here’s the sad part: I think a lot of musicians limit what they compose to what they can play,
especially guitarists and traditional keyboard players. It’s like
language. It’s not that Shakespeare is using a different English than we
are, he’s just putting the words in a certain order that makes his
stuff as great as it is. Sequencers like Cubase and Pro Tools give you
every opportunity to put things in that order.
What challenges are inherent to composing for video games?
Video game scoring is a completely different animal from
film scoring or making your own album. Film is a horizontal experience, a
linear story, so the music is linear. A video
game is a vertical experience that goes up in levels, and the music
needs to be interactive. You not only have to come up with great sounds
and themes, but you really have to know the audio engine a game company
uses—whether it’s the onboard audio engine of any of the consoles or
their own algorithm.
Let’s talk a little bit more in depth. The game starts and
you hear some sound design thing, and when the player gets more active a
rhythm comes in but the sound design is still there. You level up, some ostinato strings come in,
and the sound design recedes to the background. You’re attacked from
behind—big drums and orchestra! You shoot the guy, they stop, it goes back to the sound design stuff,
but the music starts creeping in again. So, one way of dealing with all
that is by mixing multiple 5.1 surround stems that can all be heard at
once but also in various combinations. That’s one system.
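The stem system he outlines, where synchronized surround stems are layered and faded according to game state, can be sketched as a gain map driven by an intensity value. The stem names and thresholds below are invented for the illustration; no real game engine's API is implied.

```python
# Vertical layering sketch: every stem plays in sync the whole time;
# the game state only changes each stem's gain. Names and intensity
# thresholds are illustrative assumptions.

def stem_gains(intensity):
    """intensity: 0.0 (exploring) .. 1.0 (full combat).
    Returns a linear gain per stem; sound design recedes as action rises."""
    def ramp(x, lo, hi):  # 0 below lo, 1 above hi, linear in between
        return min(1.0, max(0.0, (x - lo) / (hi - lo)))
    return {
        "sound_design":     1.0 - 0.7 * ramp(intensity, 0.3, 1.0),
        "rhythm":           ramp(intensity, 0.2, 0.5),
        "ostinato_strings": ramp(intensity, 0.5, 0.8),
        "combat_orchestra": ramp(intensity, 0.8, 1.0),
    }

print(stem_gains(0.0))  # only sound design up front
print(stem_gains(1.0))  # everything in, sound design pulled back
```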
Another system uses markers. Basically, there’s one
massive audio file—stereo or surround—and again, the cue starts with the
sound design. When the game gets more tense, calling for a little
percussion maybe, it skips to some marker in the audio file where you
find the sound design plus that percussion. But so you don’t hear the
skip, it first jumps to a transition file, then
to the marker. So for one little level that needs five minutes of music,
you might have to create an interleaved sound file 50 minutes long that
has all the different levels of intensity, the overlay sounds to mask the transitions, and so on.
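The marker system can be sketched the same way: on an intensity change, the runtime seeks to a short transition cue first and only then to the target section, so you never hear the edit. All marker names and times below are hypothetical.

```python
# Marker-based interactive music sketch: one long interleaved file
# holds a section per intensity level plus transition cues that mask
# the jumps. Times and names here are made-up assumptions.

MARKERS = {                         # section start times (seconds)
    "ambient": 0.0, "percussion": 300.0, "combat": 600.0,
}
TRANSITIONS = {                     # transition cue for each jump
    ("ambient", "percussion"): 900.0,
    ("percussion", "combat"):  930.0,
    ("combat", "percussion"):  960.0,
}

def playback_plan(current, target):
    """Return the seek points to play, in order: the transition cue
    first, then the target section, so the skip is never audible."""
    if current == target:
        return [MARKERS[current]]
    return [TRANSITIONS[(current, target)], MARKERS[target]]

print(playback_plan("ambient", "percussion"))  # [900.0, 300.0]
```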
When you perform live or spin a DJ set, how do you present the music and what gear is involved?
I’ve never been a DJ. I tried to put two records in sync 15 years ago and it took me ten minutes! [Laughs.]
I just play my own music live. I was recently going through some old
pictures—I had an Allen & Heath 40-channel mixer, three racks of
synths and samplers, a couple of [Akai] MPC2000s, and I was sequencing
the whole thing live. It was extremely expensive to fly with, and after
9/11 it wasn’t like you could roll into an airport with all that gear
and say, “I’ve got to be in Vegas in two hours.” Luckily, gear started
getting more efficient around the same time, so I could tour with a
smaller setup. From 2003 I used a Yamaha AW4416,
which was a multitrack hard disk recorder with a built-in mixer, and I
had laptops running sequences alongside it. Later, Ableton Live came
out, and for me, that was the solution.
DJs have also moved to laptops, though. So now, whether you see me playing my stuff or Armin van Buuren or someone DJing,
it looks the same, but there’s a massive difference. I have all my
clips and can cross-combine things. I can take a drumbeat from one song
and combine it with the bass of something else. I can choose different
synth sounds, or mute the vocal, or take a vocal from a different song and do a mash-up on the spot, and that’s the beauty of using Live.
You teach at a university called ArtEZ in Holland. If I were to sign up for your course, what sort of homework would I have?
The course covers electronic music in its broadest form.
Some kids want to be a sound designer or film composer. Some want to do
electronic music as an art form—museum installations and things like
that. Or, they want to be the next big DJ or dance music producer. The
first year, you have required courses, and after that you can
specialize. I actually set up the whole curriculum. It’s modeled on
Berklee in Boston in terms of the level of study and the amount of time
you need to invest. We started eight years ago. Now we have students
from all over the world: China, Europe. It’s very inspiring to see these students develop.
Who haven’t you collaborated with yet that you’d most like to?
Adrian Belew, David Bowie, and Ennio Morricone.
Those are great choices. Why those three?
If we look back to King Crimson, like, ’81 to ’85 when they did albums like Discipline and Three of a Perfect Pair, to me that’s guitar playing on a whole different level. What Belew did on “Elephant Talk,” when you see him do that live,
you’re like, “Holy s***!” And he just does it with a couple of pedals. I
chose David Bowie because of his unique taste in the music he writes.
We’re looking at a guy who’s influenced music for 40 years at least.
Ennio Morricone is for me the king of film scoring. His experimentation
in the ’50s, ’60s, and ’70s was really out
there. When he first wanted to use electric guitar in a Western, someone
probably told him he was out of his mind. But that wound up defining
the sound of Westerns. It’s like if you and I scored a sci-fi movie with
just an accordion and an Irish flute—that would be awesome! [Laughs.]
Junkie XL’s Favorite Production Techniques
In His Own Words
Key Input Compression. Also called
sidechaining, this started out as a simple tool to make mixes clearer.
Every time the kick drum hit, the bass guitar would get slightly
quieter, or the guitar solo would duck the other guitars a bit. Now in
dance music, every time a kick hits, all the keyboards and basses go
right to zero and then it pumps back up and you get this exciting sound.
It’s been overused, but I still like it. If you want specific
instruments to duck or pump when the kick drum hits, route them all to a
subgroup—keys and vocals are common choices. Put your compressor
plug-in on that subgroup and specify the kick track as its sidechain input.
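The routing he describes (a compressor on the subgroup, keyed from the kick track) boils down to a gain that follows the kick's envelope. Here is a minimal sketch with a toy peak follower and none of a real compressor's attack, knee, or makeup controls:

```python
# Sidechain ducking sketch: follow the kick's envelope, then pull the
# keys/bass subgroup down whenever the kick is hot.

def envelope(signal, release=0.99):
    """Simple peak follower: jumps up instantly, decays by `release`."""
    env, out = 0.0, []
    for s in signal:
        env = max(abs(s), env * release)
        out.append(env)
    return out

def sidechain_duck(bus, key, amount=1.0):
    """Duck `bus` by the envelope of `key`; amount=1.0 ducks all the
    way to zero at full-scale key level (the exaggerated EDM pump)."""
    return [b * max(0.0, 1.0 - amount * e)
            for b, e in zip(bus, envelope(key))]

kick = [1.0, 0.0, 0.0, 0.0]   # one kick hit, then silence
pad  = [0.5, 0.5, 0.5, 0.5]   # steady pad on the subgroup
print(sidechain_duck(pad, kick))  # pad drops on the hit, pumps back up
```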
Sidechaining Isn’t Just for Kicks. Let’s say that
for whatever reason, you don’t want to mix a lead vocal track too loud
overall, but still want it to be heard well. Use the vocal as the key
input for a compressor on a subgroup for guitars, synths, or whatever
instruments you feel are getting in the vocal’s way. Then, those will
back off when the vocal is present.
Parallel Compression. Say your drum track needs
more energy. Duplicate the track (or group track if it’s several drum
tracks you’ve thrown to a subgroup)—things will temporarily be twice as
loud. Compress the s*** out of the second track until it’s really pumpy,
then turn it down to nothing. Now, fade it in to taste behind the
uncompressed track and you get a very rich sound without losing the
attack or definition of the original.
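The parallel-compression recipe maps directly onto a blend of the untouched signal with a heavily compressed duplicate. A sketch with a static hard-knee compressor (a real one adds attack and release timing):

```python
# Parallel ("New York") compression sketch: crush a duplicate of the
# drum bus, then fade the crushed copy in under the untouched original
# so the transients survive.

def compress(signal, threshold=0.2, ratio=8.0):
    """Static hard-knee compression on sample magnitude."""
    out = []
    for s in signal:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def parallel_compress(signal, blend=0.4):
    """Original at full level plus the crushed duplicate, faded in."""
    crushed = compress(signal)
    return [dry + blend * wet for dry, wet in zip(signal, crushed)]

drums = [0.9, 0.1, -0.6, 0.05]   # loud hit, ghost note, loud hit...
print(parallel_compress(drums))
```

The quiet ghost notes gain proportionally more level than the loud hits, which is where the extra energy comes from.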
Mono/Stereo Compression. Some things in your mix are always mono—the kick drum,
maybe the snare, and probably your bass sound. Others are very stereo,
like synths, vocals, or drum overheads. Brainworx makes a plug-in that
can detect and compress mono and stereo information separately. [Look for the BX_XL or the simpler BX Boom plug-ins –Ed.]
Put this on your mix bus or drums subgroup. You need to experiment a lot, but this can be a powerful tool to clean up mixes.
Multi-Band Compression. I usually use TC Master X for this. My low band goes
up to 65 or 70Hz, my midrange band up to 2kHz, and the high band is
above that. This way, I can compress frequencies where things start
piling up and need cleaning, while keeping the overall mix sounding
tight and loud.
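The band splits he cites (roughly 70 Hz and 2 kHz) can be illustrated with crude first-order crossovers. The filter design is deliberately simple; the point of the sketch is that the three bands sum back to the input, so compressing one band leaves the others untouched.

```python
# Multiband splitting sketch with the crossover points mentioned above.
import math

def one_pole_lowpass(signal, cutoff_hz, sr=48000):
    """First-order lowpass (6 dB/oct), enough for a sketch."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)
    y, out = 0.0, []
    for s in signal:
        y = (1.0 - a) * s + a * y
        out.append(y)
    return out

def split_bands(signal, low_x=70.0, mid_x=2000.0, sr=48000):
    """Split into low (<70 Hz), mid (70 Hz - 2 kHz), high (>2 kHz)."""
    low = one_pole_lowpass(signal, low_x, sr)
    low_mid = one_pole_lowpass(signal, mid_x, sr)
    mid = [lm - l for lm, l in zip(low_mid, low)]
    high = [s - lm for s, lm in zip(signal, low_mid)]
    return low, mid, high

sig = [1.0, 0.0, -0.5, 0.25]
low, mid, high = split_bands(sig)
# The bands sum back to the input (within rounding), so you can put a
# separate compressor on each band and re-sum.
recombined = [l + m + h for l, m, h in zip(low, mid, high)]
```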
Parallel EQ. This works like parallel compression,
except it’s for a track that has too little of a frequency you want, or an
instrument that needs more excitement. Duplicate the track, solo the
duplicate, find the frequency you like, and don’t be afraid to give it
an extreme boost, as you’re just using this track for fading in behind the original.
Brainworx also has plug-ins that let you EQ mono signals separately from
stereo ones in the same program. So if you have a stereo drum loop and
think the kick and snare need more punch, you can EQ that in without
ruining the overall sound.
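Parallel EQ follows the same dry-plus-processed pattern as parallel compression: isolate the frequency region on a duplicate, boost it hard, and fade it in behind the untouched track. The band edges and gain values below are arbitrary choices for the sketch.

```python
# Parallel EQ sketch: extreme boost on a duplicate, blended in quietly.
import math

def lowpass(signal, cutoff_hz, sr=48000):
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)
    y, out = 0.0, []
    for s in signal:
        y = (1.0 - a) * s + a * y
        out.append(y)
    return out

def parallel_eq(signal, lo=80.0, hi=200.0, boost=4.0, blend=0.25,
                sr=48000):
    """Isolate lo..hi Hz on a duplicate, boost it hard, then fade the
    result in behind the untouched original."""
    band = [b - a for b, a in zip(lowpass(signal, hi, sr),
                                  lowpass(signal, lo, sr))]
    return [dry + blend * boost * wet
            for dry, wet in zip(signal, band)]
```

With `blend=0.0` the output is the dry track, so the duplicate really is only "faded in behind" the original.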
Creating Bass Drums. To get unique sounds for the
all-important kick drum, I layer different drum sounds and synth
patches. To cover the sub-bass end, I use something like a TR-808, 909,
or similar kick sound. I put samples on top, whether it’s me hitting an
acoustic kit or a sample from a record, and that way I create
character while still having the low end to support it. I usually run three or
four channels worth of this layering into one subgroup and then use the
compression and EQ techniques we’ve discussed here.
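The layering recipe (an 808/909-style sub plus a character sample on top, each on its own channel) can be sketched with synthetic layers. The click layer here is just a stand-in for a real sample, and all parameters are arbitrary assumptions.

```python
# Kick layering sketch: a decaying sine covers the sub, a short clicky
# transient adds character; each layer gets a channel gain before the
# sum goes to the kick subgroup.
import math

SR = 48000

def sub_layer(freq=50.0, dur=0.25, decay=18.0):
    """808-ish kick: sine with exponential amplitude decay."""
    n = int(SR * dur)
    return [math.sin(2 * math.pi * freq * i / SR) *
            math.exp(-decay * i / SR) for i in range(n)]

def click_layer(dur=0.25, decay=400.0):
    """Stand-in for a sampled attack: short decaying burst."""
    n = int(SR * dur)
    return [math.exp(-decay * i / SR) *
            (1 if i % 2 == 0 else -1) * 0.5 for i in range(n)]

def layered_kick(gains=(1.0, 0.6)):
    sub, click = sub_layer(), click_layer()
    return [gains[0] * s + gains[1] * c for s, c in zip(sub, click)]

kick = layered_kick()
```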
What keyboard players have specifically inspired you?
I’m a massive fan of Jean Michel Jarre and Vangelis—the Blade Runner score is one of my all-time favorites. If you’re talking about a player—someone
who sits down at the keyboard and just amazes you—there’s Keith
Jarrett. When you see him play acoustic piano and do all the stuff he
does, it’s mind-boggling. But to me who was really interesting was Herbie Hancock when the album Future Shock and the single “Rockit” came out. For a guy with a jazz background like his to do an electronic album like that, and for it to become the blueprint for all the “beat” guys? That was amazing!