MIDI Reloaded: Why MIDI Still Matters, Part 1

March 12, 2014
Three decades ago, keyboards were wedded to tone generators in the same enclosure, and digital audio recording was expensive and rare. Then MIDI appeared: a digital language by which synths, sequencers, and controllers talk to one another. Among its early benefits was separating the keyboard controller from the sound generator, so that a single keyboard could control multiple, less expensive, and typically rack-mounted tone modules. MIDI also offered a high-quality “multitrack” recording experience to composers and working musicians who couldn’t have afforded their own multitrack tape machines. Today you can easily record 100 tracks of digital audio on a basic laptop, so MIDI may seem irrelevant in the studio. Yet MIDI remains not only viable but valuable, because it lets you exploit today’s studio in ways that digital audio still can’t.
 

MIDI basics. Digital audio is a representation of sound itself: the numbers involved are eventually converted back into a voltage that can drive an amp or powered speaker. MIDI data, by contrast, is a set of instructions that tell something capable of making sound (a hardware or soft synth, for example) what and how to play. The classic analogy is the old-fashioned player piano: the paper piano roll isn’t a recording you can play back in the absence of the piano itself; instead, the holes punched in it are read by a mechanism that tells the piano’s hammers which strings to hit.

Another good analogy is word processing. When you type on a QWERTY keyboard, it generates digital data representing letters, spaces, punctuation marks, and so on. This data flows into your computer, which can then translate it into pixels on a monitor so you can see what you’ve typed, cut and paste words, email it, and so forth. A MIDI musical keyboard translates your playing into data that flows out from a MIDI output, then travels over a serial connection (more on this later) into a synth, tone module, or your computer. The data doesn’t equal the music any more than the data from your QWERTY keyboard equals your native language, but in both cases, the former issues marching orders to a system that then creates the latter.
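
To make that concrete, here’s a minimal sketch, in plain Python with no MIDI hardware or libraries required, of what those marching orders actually look like. The three-byte layout is straight from the MIDI spec; the script itself is purely illustrative.

```python
# A MIDI Note On message is just three bytes: a status byte that
# says "note on, channel N," then the note number, then the
# velocity (how hard the key was struck). It describes a gesture;
# it contains no sound.

NOTE_ON = 0x90   # status nibble for Note On
channel = 0      # channel 1 (channels are numbered 0-15 in the data)
note = 60        # middle C
velocity = 100   # a fairly hard keystrike

message = bytes([NOTE_ON | channel, note, velocity])
print(message.hex(" "))   # prints: 90 3c 64
```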

Having a set of instructions that can be saved and edited separately provides all kinds of benefits. Before discussing those, let’s cover a few more MIDI basics.


The MIDI interface. MIDI’s interface protocol differed from the ones used at the time for printers and other peripherals: it’s a serial connection that runs at 31.25 kilobaud, slow by modern standards but fast enough for performance data. A five-pin DIN connector (see Figure 1) served as the MIDI output that transmitted data from your keyboard or computer, and another DIN connector served as the MIDI input through which a tone module or computer received data. Early personal computers needed an external interface box that had MIDI connectors and converted the MIDI data to and from a standard computer interface protocol. (Atari’s ST series and Yamaha’s CX5M were two exceptions; they had built-in MIDI connectors.)

Most modern MIDI keyboards interface with computers over USB, which eliminates the need for a separate interface box and transfers data faster. For older synths, DIN-to-USB converter cables can provide a computer interface.


MIDI channels and ports. MIDI sends data over any of 16 channels, and tone modules or virtual instruments can “tune in” to a specific channel. For example, suppose you assign three different synthesizers to receive data over MIDI channels 1, 2, and 3. Changing the transmission channel on your controller keyboard (or on a MIDI track’s output in your DAW) chooses which synthesizer will play. For layering, you can set multiple synths to the same channel. Keyboards with split and layer functions often work the same way internally: different keyboard zones transmit on different MIDI channels to the sound engine.
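
Here’s a small illustrative sketch of the “tune in” idea: the channel lives in the low four bits of each message’s status byte, so a module simply ignores anything not addressed to it. The function name is invented for illustration.

```python
# A tone module "tuned" to one channel ignores messages addressed
# to any other channel. The channel is the low nibble of the
# status byte; front panels count channels 1-16, the data 0-15.

def responds(status_byte: int, listen_channel: int) -> bool:
    return (status_byte & 0x0F) == listen_channel - 1

note_on_ch1 = 0x90   # Note On, channel 1
note_on_ch3 = 0x92   # Note On, channel 3

print(responds(note_on_ch1, 1))   # True:  this module plays the note
print(responds(note_on_ch3, 1))   # False: that note is someone else's job
```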

Eventually 16 channels weren’t enough, so interfaces (and even some keyboards) started sprouting multiple ports—a four-port interface can send data over 64 (four times 16) different channels. One DIN connector equals one port, but a single USB connection can handle multiple ports simultaneously. 

Another development, the multitimbral synth, plays back several different sounds at once, each responding to its own channel. This is ideal for computer-based multitrack MIDI recording and playback because data recorded on one channel can drive (for example) a bass sound, while another channel plays back a piano sound, another triggers drums, and so on.
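
In code, the multitimbral idea is little more than a lookup from channel to sound. The sketch below is hypothetical (the patch names and event format are invented for illustration), but it captures how one instrument can serve a whole arrangement.

```python
# One synth, several sounds, each listening on its own channel.
sounds = {1: "acoustic bass", 2: "grand piano", 10: "drum kit"}

# A few recorded events, as (channel, note, velocity) tuples:
sequence = [(1, 36, 96), (2, 60, 80), (10, 38, 110)]

for channel, note, velocity in sequence:
    patch = sounds.get(channel, "(no sound assigned)")
    print(f"channel {channel}: {patch} plays note {note} at velocity {velocity}")
```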

Today, external tone modules have been upstaged by software instruments (see Figure 2), but these still take their orders from MIDI. Now let’s consider some advantages of MIDI over digital audio.


Songwriting. If you start a composition with MIDI instruments, you can transpose pitch and alter tempo easily. Transposing simply tells the target synth to trigger notes that are a certain number of semitones higher or lower. For this reason, there are none of the artifacts associated with audio-based time or pitch stretching. Once you nail the right tempo and pitch for your arrangement, you can then start recording audio parts such as vocals. You can set up a multitimbral MIDI instrument template with different instruments on different channels, so as to get going quickly if inspiration strikes. You can also record “placeholder” parts to keep the creative flow going and not concern yourself with recording the perfect part, because MIDI allows for . . .


Deep editing. Digital audio allows for broad edits, like changing levels or moving sections around, and editing tools such as Melodyne are doing ever more fine-grained audio surgery. But MIDI is more fine-grained still: You can edit every characteristic of every performance gesture: dynamics, volume, timing, the length and pitch of every note, pitch-bend, and even which sound is being played. MIDI data can tell a piano sound what to play, or if you change your mind, a Clavinet patch. With digital audio, changing the instrument that plays a given part requires re-recording the track.
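
A DAW can offer that kind of surgery because each note is stored as a small record whose fields can be changed independently. The structure below is a hypothetical simplification, not any particular DAW’s format, but it shows why transposing or reshaping a MIDI part leaves no artifacts: you’re editing instructions, not audio.

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    start_beat: float    # when the note begins
    length_beats: float  # how long it lasts
    pitch: int           # MIDI note number, 0-127
    velocity: int        # how hard it was played, 1-127
    channel: int         # which instrument responds

note = NoteEvent(start_beat=4.0, length_beats=0.5,
                 pitch=60, velocity=90, channel=1)

# "Deep editing": transpose up a major third, soften the attack,
# nudge the timing, and hand the part to the synth on channel 2,
# all without re-recording anything.
note.pitch += 4
note.velocity = 70
note.start_beat += 0.05
note.channel = 2
print(note)
```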


Different types of editing. “Piano-roll” editing is the most common way to edit, but you can also edit on a musical staff view, a grid optimized for editing drum parts, a spreadsheet-like list of every single MIDI event in a part, and in some cases, even guitar tab (see Figure 3).


MIDI plug-ins. These process MIDI data (see Figure 4). They can set minimum and maximum velocity limits, constrain notes to a particular scale, repeat data for echo effects, change the order of notes in a phrase subtly or drastically, and much more. Not all DAWs accommodate MIDI plug-ins, but most of the majors do.
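
To show how simple these effects can be conceptually, here’s a sketch of two of them, a velocity limiter and a MIDI echo, written as plain functions over (time, note, velocity) tuples rather than in any real DAW’s plug-in API.

```python
def limit_velocity(events, low=40, high=110):
    """Constrain every note's velocity to the low-high range."""
    return [(t, n, max(low, min(high, v))) for t, n, v in events]

def midi_echo(events, delay=0.25, repeats=2, decay=0.6):
    """Follow each note with quieter copies, like an echo."""
    echoed = list(events)
    for t, n, v in events:
        for i in range(1, repeats + 1):
            echoed.append((t + i * delay, n, max(1, int(v * decay ** i))))
    return sorted(echoed)

phrase = [(0.0, 60, 127), (0.5, 64, 15)]   # (time, note, velocity)
print(limit_velocity(phrase))   # velocities pulled into the 40-110 range
print(midi_echo(phrase))        # each note trailed by two softer repeats
```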


Mix and synth automation. In addition to notes, the MIDI spec includes dedicated, numbered controller messages that can control panning, level, modulation, sustain, and more; in effect, they’re extra hands turning your synth’s knobs as you play. With a multitimbral instrument, controller data can alter the mix by changing instrument levels on the fly.
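
Controller moves travel as Control Change messages: three bytes again, with a controller number in the middle. The numbers below (1 for mod wheel, 7 for channel volume, 10 for pan, 64 for sustain) are standard assignments from the MIDI spec; the fade itself is just an illustration.

```python
MOD_WHEEL, VOLUME, PAN, SUSTAIN = 1, 7, 10, 64

def control_change(channel, controller, value):
    """Build a CC message; channel is 1-16, value is 0-127."""
    return bytes([0xB0 | (channel - 1), controller, value])

# Fade down the instrument on channel 3 by riding its level:
for level in (100, 80, 60, 40, 20, 0):
    print(control_change(3, VOLUME, level).hex(" "))
```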


These are the basics, but MIDI can do much more—as you’ll find out in part 2 next month.

Craig Anderton is the former executive editor of our sister magazine Electronic Musician, one of the original gurus of music technology how-to, and currently the "Chief Magic Officer" at Gibson Brands.