(This article originally appeared in the November 1979 issue of Contemporary Keyboard magazine.)
Photo Credit: Sibyl Heishman
ONE OF THE POTENT forces changing the face of music during the past 20 years has been the synthesizer. But the musician who wanted to use a synth for live performance paid a price for the sheer size of the tonal palette suddenly placed at his or her fingertips—namely, the amount of time it took to tweak up the forest of controls on the instrument's front panel. At best this was an annoyance; at worst, it spelled disaster.
Manufacturers tried to make their instruments easier to play live by building in presets. With a preset, you push a button and the instrument instantly calls up from its memory whatever tone color the manufacturer thinks or hopes you will find most useful. The preset approach is the best one for some people, just as the completely variable front panel is others' first choice. But something more was clearly needed.
It's only been in the last year or two that the next step has been taken. Microprocessor technology evolved in the computer industry has been successfully applied to keyboard synthesizers, with the result that for the first time we have instruments on which the user can set up his or her own personal tone color, push a button, and have the machine remember all the knob settings exactly for later recall, again with the touch of a button. Needless to say, keyboard players have been quick to spot the advantages of this system, and the first company to market a completely programmable polyphonic synthesizer, Sequential Circuits of Sunnyvale, California, has not been hurting for orders for their groundbreaking Prophet-5.
Sequential Circuits is the brainchild of 29-year-old Dave Smith. Like many of the pioneers in this field, Dave spent some time in his early years juggling twin interests in music and electronics. By the time the San Francisco native graduated from the University of California at Berkeley with a B.S. in computer science and electrical engineering, he had already done some gigging as a bassist and guitarist with a progressive rock trio. For the next five years Dave paid his dues working for Lockheed, General Electric, and several smaller companies including Standard Microsystems, Signetics, and Diablo Systems, where he got acquainted with microprocessor technology.
It wasn't until 1972, when he saw and immediately bought a Minimoog, that Smith began to consolidate these two areas. He began making tapes with his new synthesizer and a four-track tape deck, but before long he had grown dissatisfied with that setup. "Granted, there are a million things you can do with the Minimoog that most people don't even get near touching, but still everything is pretty much pre-patched, and I wanted to start doing more," he explains. Starting from scratch, Dave soon built his own analog sequencer, which also functioned as a waveform generator when interfaced with a keyboard.
"When I finished it," he relates, "I realized that maybe someone else would want one, and that I might try to sell one. I guess that's how Sequential Circuits officially got started." In 1974 the company name was trademarked, and Dave, working literally out of a closet in his one-bedroom apartment, began marketing and improving his Model 600 sequencer in his spare time. By late 1975, with his lab spilling over into the extra bedroom of another apartment, he was building the digital Model 800 sequencers. After a while he was renting workspace in Sunnyvale, in the area known as Silicon Valley, the heart of California's computer country, plugging in a telephone answering machine to take orders, and hiring assemblers. But it wasn't until April 1977, just after designing his synthesizer programmer, that Smith quit his regular job and began devoting his full energies to Sequential Circuits.
Dave seemed to have carved out a comfortable niche for himself as a producer of sequencers and other synthesizer accessories, but later in 1977 all that changed. During the early summer, the SSM company (formerly Solid State Music) produced a set of synthesizer IC chips—a VCO, a dual VCA, a voltage-controlled filter, and a voltage-controlled transient generator. These ICs made it possible to produce a complete synthesizer voice at a substantial savings in space and cost. When it appeared after several months that no synthesizer manufacturer was about to incorporate these chips along with microprocessor technology into its products, Smith suddenly decided that Sequential Circuits should fill this beckoning hole in the market. With that, he took his plunge into the synthesizer world.
Setting their sights on the January 1978 NAMM (National Association of Music Merchants) show in Los Angeles, Dave and Barb Fairhurst, now vice-president at SC in charge of operations, began working together on the first incarnation of the Prophet. Dave did the electronic design, taped up the circuit boards, prepared silk-screen layouts, programmed the micro-computer, and handled all the mechanical work and engineering, while Barb took care of pricing, purchasing, bookkeeping, and sales. They worked frantically against the clock for four months, finally coming up with a prototype at the last minute—so close, in fact, to the deadline that they were a few minutes late in rushing their product to the NAMM display floor after the show had opened.
Compared to the present-day Prophet, the prototype was not entirely up to scratch; there was no edit mode, and, as Dave now remembers, "there were glitches galore." Nonetheless, it was one of the hits of the NAMM show, and orders poured in so quickly that it took all of 1978 and most of 1979 to catch up with the demand. Smith's gamble paid off, and overnight the state of the art for synthesizer technology took a great leap forward.
Today there are almost fifty full-time workers at Sequential Circuits, and finishing touches are being applied to the construction of new and expanded facilities. Meanwhile, such trend-setting keyboardists as Rick Wakeman, Herbie Hancock, Josef Zawinul, George Duke, and Tony Banks have been familiarizing audiences around the world with the Prophet, and extolling it offstage as well.
And back in Sunnyvale, Dave Smith, the man behind the Prophet and the revolution it helped inaugurate, took time to share with CK his thoughts on the design of his instrument, the new wave of digital-analog hybrids, the importance of programmability for musicians, and the introduction of microprocessor technology into synthesis.
* * * *
COULD YOU DESCRIBE your general working approach as a synthesizer designer?
Well, this could be taken the wrong way, but I have either a good or a bad tendency to not always research everything completely to the very last corner of the universe before I start a project. That's why I never go out of my way to check out everything in the market. If you asked me right now what a Brand X or Y synthesizer did, I'd probably be unable to give you an exact answer, because I'd rather just come up with ideas out of my own head, using my own judgment as a musician and engineer, than get distorted ideas of what synthesizers should be, based on what other people have done. When I first designed the Prophet, there were all of probably three or four people in the world who knew about it until it was announced. It was more or less a top-secret project, and we didn't ask a whole lot of musicians, famous or infamous, what they thought should be on the instrument. We just decided which way to go, not arbitrarily, but based on our own experience as musicians and engineers. And given the success of the Prophet, I'd have to say that that was the right way for us to go.
Wouldn't a different set of problems come up, though, with such a totally in-house approach?
Yes and no. There is a limit, of course. You don't want to shut yourself off and never listen to anybody, because we do regularly take ideas from people, but I just have a feeling that if you get too many people involved, it's the old "too many cooks" syndrome. It ends up being nothing instead of everything. You come out with one, and these people like it but other people don't, so you go through and do some more changes, and it all has two effects: One is that it takes too long to get it out on the market once and for all as a product, and the other is that research and development has to cost you a lot of money, which has to be recovered in selling the product. If you can develop something from start to finish in four months and get it out the door, it's got to be cheaper than taking two years and seven levels of redesign to do it.
Before you got into synthesizers, you were designing accessories with integrated circuits but no microprocessors. Why was that?
The ICs were more compact and more cost-effective. And also, there were no microprocessors back then, so you had no choice; ICs were the best way to do things.
When the microprocessor first came out, it was a little too slow and expensive to use well; it used to cost $300 or $400 for one microprocessor. Well, they're down to eight dollars now. So when we did the Prophet, the only reason I went with microprocessors was that it greatly simplified the design to handle some of the functions with software. If I had to try to do all of the voice assignments, the programmability, and all these things just using the hardware—discrete ICs—it would have taken at least twice as long to get it done.
How would you explain the difference between hardware and software?
On a simple level, software can be thought of as a computer program. It's not a physical thing, because in the classical sense it's punched up on IBM cards and read into a machine. Hardware is anything you can touch. It's a wire, an IC of any kind, a capacitor or resistor, the whole circuit board, something visible and electronic—all that could be termed hardware. Hardware does one thing and that's it; a special IC or transistor will do only what it's supposed to do and nothing else. In some ways the processors are really stupid in that they can only do one thing at a time. They can look at one switch, one key, the keyboard, an LED, or they can send a gate out or send a control voltage to the analog section, but they only do one at a time. What saves them is that they do them really fast.
In the Prophet the loop time is about seven or eight milliseconds. That's what it takes for the microprocessor to check everything being input to it and output the corresponding control signals. When you hit a key, it has to wait until the computer sees what key was hit, then it assigns one of the five voices to that key by turning on the gate and putting the right control voltage to the oscillators. Then, by controlling the analog section, it produces a note. Basically there are two sections of the Prophet. There's the computer part that does all the controlling, and then there's the sound part, which is all analog circuitry. There's nothing tricky about it; it's not much different from sticking five Minimoogs in one box and having each controlled separately. There are five different monophonic synthesizers in there; the difference is that they are programmed. Based on how many notes you have down and which notes they are, the computer will decide which one of those five synthesizers will play which note at what time.
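The voice-assignment idea Smith describes—scan the keyboard, pick one of five identical voices, turn on its gate, and send the right control voltage—can be sketched in modern Python. This is purely an illustration: the note numbering, the 1-volt-per-octave arithmetic, and the class interface are my assumptions, not the Prophet's actual firmware, which was 8-bit machine code.

```python
# Sketch of polyphonic voice assignment: the processor scans the
# keyboard and, for each new key, assigns one of five identical
# synthesizer voices by opening its gate and setting its control
# voltage. (Illustrative only; details are assumed.)

NUM_VOICES = 5

class VoiceAllocator:
    def __init__(self):
        # Each slot holds the note number that voice is playing,
        # or None when the voice is free.
        self.voices = [None] * NUM_VOICES

    def note_on(self, note):
        """Assign a free voice to `note`; return (voice_index, cv)."""
        for i, held in enumerate(self.voices):
            if held is None:
                self.voices[i] = note
                # 1 volt per octave means 1/12 volt per semitone.
                control_voltage = note / 12.0
                return i, control_voltage
        return None  # all five voices busy: the note is dropped

    def note_off(self, note):
        """Release whichever voice is holding `note`."""
        for i, held in enumerate(self.voices):
            if held == note:
                self.voices[i] = None
                return

alloc = VoiceAllocator()
voice, cv = alloc.note_on(12)   # note 12 = one octave up -> 1.0 V
```

A sixth simultaneous key gets no voice until one is released—exactly the trade-off of putting "five Minimoogs in one box" under computer control.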
How do the microprocessor and the program memory modules work together?
Even though all the sound generation is analog and all the control is digital, somewhere there has to be a crossover, and when that happens you're stuck with having just so much accuracy. We put a lot of accuracy in all the pots and all the controls of the Prophet, so with most of them you can't tell that when you turn a pot the tone is actually stepping, rather than changing smoothly. The only place you can tell is on the initial oscillator frequencies, because they step in semitones, but that was of course done on purpose. So basically the computer always has the status of the whole machine stored digitally, and one of the things it does in its loop is to update everything in analog fashion.
In other words, the pots send out digitally stepped information.
Yeah. Nothing goes directly from the front panel to the sound generation. It all goes to the computer first.
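The point about stepped pots can be shown with a toy quantizer. The resolutions below are invented for illustration; the idea is simply that a fine step size is inaudible on most controls, while the oscillator-frequency pot is deliberately coarse, snapping to semitones.

```python
def pot_to_steps(pot, levels):
    """Map a pot position in [0.0, 1.0] to one of `levels` discrete steps."""
    return min(int(pot * levels), levels - 1)

def osc_pot_to_semitone(pot, semitone_range=48):
    """The frequency pot snaps to semitones, here over a four-octave range."""
    return pot_to_steps(pot, semitone_range + 1)

# A fine-resolution control: 128 steps is too fine to hear as stepping.
filter_cutoff_step = pot_to_steps(0.5, 128)   # -> 64
# The oscillator pot: deliberately coarse, one step per semitone.
semitone = osc_pot_to_semitone(0.25)          # -> 12, i.e. one octave up
```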
How does the computer output turn into a control voltage?
Through a device called a digital-to-analog converter, which takes a digital number in binary form and converts it into a voltage. That has to be done super-accurately, because the oscillators are very picky about small changes in volts. One millivolt can make a tuning difference.
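The arithmetic behind "one millivolt can make a tuning difference" is easy to check. At 1 volt per octave there are 1,200 cents (hundredths of a semitone) per volt, so a 1 mV error is 1.2 cents of detuning, and the step size of an ideal n-bit DAC over a given range follows directly. A back-of-the-envelope sketch; the bit counts and voltage range are examples, not the Prophet's actual specifications:

```python
def dac_step_mv(bits, full_scale_volts):
    """Smallest increment of an ideal n-bit DAC, in millivolts."""
    return full_scale_volts / (2 ** bits) * 1000.0

def cv_error_in_cents(error_mv):
    """Pitch error from a control-voltage error at 1 V/octave (1200 cents/V)."""
    return error_mv / 1000.0 * 1200.0

# A 1 mV error detunes a note by 1.2 cents.
# A 14-bit DAC over a 10 V range steps in roughly 0.61 mV increments,
# i.e. well under one cent of pitch error per step.
```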
You said you began using the microprocessor because it became cost-effective. What other reasons were there?
It allowed for enormous flexibility. We were able to do things like put in an edit mode, which we couldn't do in the programmer because it would have added too many ICs. We could have done it with a lot of hardware, but it would have been a real pain, whereas in the Prophet we were able to do it almost for free by just changing the program. We could do other functions easily with it too, like the automatic tuning.
What does that do?
It scales the two oscillators as well as tunes them to each other, so by the time it's done they're real close to each other. They're never going to be perfect—voltage-controlled oscillators can never be perfect—but if you get an occasional beat or two between them, that's normal, and in most cases to be desired. Some people don't understand that with synthesizers. They even complain about hearing phasing, because they're just not used to it. But if you think about acoustic instruments, that's the way they are, and that gets us into the whole organ vs. synthesizer thing. Shall we open that can of worms?
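Scaling the oscillators as well as tuning them amounts to a two-point calibration: measure each VCO at two control voltages, solve for its actual offset and volts-per-octave scale, and pre-distort every CV sent afterward. A sketch under that assumption—the `measure_hz` interface and the reference frequency are hypothetical, not the Prophet's actual routine:

```python
import math

def make_corrector(measure_hz, f0=55.0, low_cv=1.0, high_cv=4.0):
    """Build a CV-correction function for one oscillator.

    `measure_hz(cv)` is a hypothetical routine that applies a control
    voltage and measures the resulting frequency. Ideal tracking is
    f = f0 * 2**cv (1 V/octave); real VCOs drift in both offset and
    scale, so we solve for both from two measurements.
    """
    f_low, f_high = measure_hz(low_cv), measure_hz(high_cv)
    a = math.log2(f_high / f_low) / (high_cv - low_cv)  # actual octaves/volt
    b = math.log2(f_low / f0) - a * low_cv              # actual offset, octaves
    # To land on the desired pitch, pre-distort the CV we send:
    return lambda desired_cv: (desired_cv - b) / a

# Simulate a VCO whose scale runs 3% short and which sits 0.05 octave sharp:
detuned = lambda cv: 55.0 * 2 ** (0.97 * cv + 0.05)
correct = make_corrector(detuned)
# After correction the oscillator lands on pitch: detuned(correct(2.0)) is 220 Hz.
```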
Be our guest.
Well, this is a subjective opinion, because everyone's opinions usually depend on what they're marketing [laughs], but I think there are basically two classes of keyboard instruments around: those that have separate tone generators or sound sources, and those that don't. Pianos, pipe organs, and things like that are in the first category, but there are only two instruments in the world that I can think of in the second one, and those are electronic organs and electronic pianos.
Electronic, as opposed to electric, pianos?
Right. Electric pianos are different because they have individual tines or reeds on each key. Now, I think synthesizers should be in the first category. A monophonic synthesizer will have a completely separate tone generator for each note played, and it can play only one note at a time. In keeping with that norm, when you play more than one note on a synthesizer, I think each note should be completely independent. If they're not, then the instrument is over in the other group with the electronic organs. There are a lot of people selling what they call synthesizers, but which aren't really much more than electronic organs with a little bit of synthesizer processing.
Is the terminology important, or the fact that there's a difference in the kind of sound produced?
Well, I think one leads to the other. The sound of a divided-down network is not the same as that of a completely independent network. I think that's definitely one of the things that gives the Prophet its warm and natural feeling, if you want to use words like that. Though it has five voices and they're basically trimmed on the inside to be identical, they're never exactly the same. One is going to be just a touch brighter, with a slightly longer or shorter attack, and that combination of playing a bunch of notes that are basically the same but still a little bit different really gives a fat sound, to use another overused word.
Let's get into the importance of the programmability element in the genesis of the Prophet.
Right. I can't emphasize that enough. It seemed apparent from the time the very first synthesizers came out that the end result would be an instrument that was programmable and could play more than one note at a time. That's why it's kind of weird even now to hear people talk like it's an expensive feature that shouldn't be there.
When you say "programmable," you are talking about the ability to create your own patch and store it in the instrument's memory, from which it can be instantly recalled.
Yeah, even after the power has been turned off. People seem to be getting into two levels of that, one that allows for complete programmability and one that doesn't. To us it makes no sense to have only partial programmability. The technology is here, and there is no reason why someone can't do it all. I don't think anybody should come out with a synthesizer, unless it's super-cheap, that's not programmable.
Why is programmability so important?

Because it takes a large problem off the musician's shoulders. There are different levels of people who play synthesizer, and we get a lot of each one. There are people who know nothing about what they're doing, who buy our instrument and never change the presets. There are a few people, who really know what they're doing, who immediately change all the presets and learn how to use every part of the instrument. Then there's a whole class of people in between, who know enough to get by. They want to be able to play with it, but they don't have the full realization of what exactly is going on. These are the same people who might say—and you've probably heard it many times—"I got this great patch on my Minimoog or Odyssey or whatever yesterday, and I wish I could get it back." Well, full programmability leads to being able to have more creativity, because somebody can set patches almost randomly and summon them up later. Whether or not they know what they're doing, they can hear what they're doing as they add or subtract parts of the sound. But the bottom line is that when they get something they really like, they push a button and it's there forever—or essentially forever, since the batteries have a ten-year life. That rids the musician of having to write down and remember every pot and switch perfectly.
Wouldn't that encourage a lot of people to blunder their way through programs without developing the slightest creative insight into their instruments?
That's true. People are going to get lazy, but you can also argue that those people in the in-between category who would do that probably never would bother to learn synthesizers anyhow. A lot of people who are using the Prophet have never owned a synthesizer before. If they used a Minimoog they'd probably stick to the two or three standard sounds that everybody uses on them, and that would be an automatic thing too—"Put the filter here, the resonance there, and you get this sound." I guess it's just easier for them not to know.
To which of your categories do you assign people who never touch the presets on the Prophet? Are they playing synthesizer or organ?
They're playing a preset organ, that's right. I can't argue that fact.
Does that bother you?
It does, but what bothers me most is getting a machine that's been in the field for a while that still has the original programs. Those programs were pretty good, but we try to point out that they are a starting point only. We give them a couple of string sounds and some nice organ sounds, but I can't believe that somebody wouldn't say, "Well, that's a nice organ, but it's too bright," or "Those strings are too mushy; they need a faster attack." After all, people can edit these patches, changing just one parameter and keeping the rest, still without knowing what they're doing. We encourage people to tailor the programs to their own needs, rather than tailoring their playing around the programs that are in the machine.
How do you feel about synthesizers that pursue polyphony in the classic sense, with different timbres controlled by the keyboard?
That's valid. I should make it clear that one of my basic philosophies is that anything is valid that can be used to make music. If you figure out a way of getting razor blades and tissue paper and hamsters to make an interesting sound, it's valid.
Why did you decide to make the Prophet homogeneous in texture?
Two reasons: cost effectiveness, obviously, and ease of use and understandability. If you found out how many people regularly use an Oberheim Four-Voice to get those different timbres simultaneously, the number would be real small. The problem with multiple voices is that unless you play it monophonically, in which case it is no more than a monophonic synthesizer with a lot of voices, it can be very limiting because you have to know exactly what you're playing to get the right voice at the right time. I'm not bad-mouthing it, but as a practical matter, it takes another technique, and most people would rather stack two keyboards than learn the new technique.
Given the thrust for non-modular polyphonic instruments in the synthesizer world, is there any more need for modular synthesizers?
Oh, sure. It's still a completely different thing. Granted that you can do a whole lot of things with performance-oriented synthesizers like the Prophet that are not performance-oriented, you still can't take oscillator two and modulate a transient generator in the Prophet, because there's no path for it, whereas it's still wide open in a modular system. Modular systems are also great for instrument design research. You can patch up things before hard-wiring them. And they are still good in the studios, because there are still things you just can't do right on any performance synthesizer. Some of the sounds Wendy Carlos has gotten on some of her records just can't be made on a preset machine, as far as either realism or complexity goes. But it's also going the other way. There are a lot of things that people will go to the trouble of setting up on a modular that you could save a lot of time doing on the independent polyphonic. If you can get a decent horn sound, why not do four or five tracks at once, rather than sit there beating your head against the wall?
Is it possible to make a programmable modular synthesizer? Would anyone be interested, and would it cost a million dollars?
Yes, yes, and yes. I'd like to build one for my own use, but I don't think it'd be commercially practicable, because of the cost and complexity. But the way to set up something like that is to have a wide-open and buss-oriented system with modules that plug in. Everything would go in and out of these modules through VCAs, and it would all be totally voltage-controlled. You'd have a large series of these VCA-controlled busses where either signals or a control voltage can go across. Then that all gets programmed, so you can essentially take any signal or control voltage as a signal, and connect it to anything else in any combination. After that you could add modules and do what you wanted, as long as you left the module in its position so that the computer always knew where the control voltages were going. You could get the best of both worlds, but it would cost a lot and not many people would understand it.
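The "wide-open and buss-oriented" system Smith imagines—every module output reachable from every module input through a VCA, with the whole set of connections stored as a program—can be modeled as a crossbar of gains. A toy model; the module names are invented for illustration:

```python
# A programmable patch matrix: a patch is just the stored matrix of
# VCA gains, so "reprogramming" means loading a different matrix.

class Crossbar:
    def __init__(self, outputs, inputs):
        self.outputs = list(outputs)
        self.inputs = list(inputs)
        # gains[out][in]: 0.0 = unpatched, 1.0 = full level.
        self.gains = {o: {i: 0.0 for i in self.inputs} for o in self.outputs}

    def patch(self, src, dst, gain=1.0):
        """Connect one module output to one module input through a VCA."""
        self.gains[src][dst] = gain

    def route(self, signals):
        """Mix the current module output signals onto each input buss."""
        return {
            i: sum(signals[o] * self.gains[o][i] for o in self.outputs)
            for i in self.inputs
        }

xbar = Crossbar(["osc2", "lfo"], ["vcf_cutoff", "env_time"])
xbar.patch("osc2", "env_time", 0.5)   # osc 2 modulates a transient generator
mixed = xbar.route({"osc2": 1.0, "lfo": 0.2})
```

Because every connection is a number rather than a patch cord, the computer always "knows where the control voltages are going," which is exactly what makes the setup programmable.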
More and more people are using their own microcomputers as control devices for hybrid home-brew synthesizers. How big will this trend become?
I don't think it will ever get that widespread, because programming is not the kind of thing that people will take the time to learn. It's easy in some ways, but you're always going to have to learn a specific process of entering and processing information. Say you're doing some sort of creative musical thing where you are going to have to know how to tell it what note to play: How do you tell it timing? How do you tell it to edit, or to add or subtract?
What if the computer comes with a manual specifically describing how to hook up with a certain synthesizer?
Well, then you're just talking about a programmer, rather than a hobby computer being used as a programmer. As soon as you make it less simple, the typical musician is not going to want to bother with it. There are probably very few musicians like Roger Powell or Larry Fast, who do have a complete grasp of the technological end. Most of them are more concerned about seeing the keyboard there than about learning what a microcomputer does. So there will be a little trend for hobbyists who build their own systems, but there will be more engineering people than serious musicians involved.
Have you been disappointed by the way most musicians have utilized completely digital technology?
Yeah, but partly because the digital thing is just not here yet: It costs too much, the instrument interface so far has been horrible, and it's a new thing that hasn't been fully explored. I haven't heard any sound qualities from a digital machine that a Minimoog can't run circles around.
Certainly you've heard more complex sounds from digital than from analog machines.
But not interesting enough sounds to warrant the cost and hassle. I don't know if we'll ever get rid of analog altogether; tube amps are still around, and ten or twenty years ago nobody would have thought that would be the case. But I do realize that digital is getting cheaper and more prevalent in the general-purpose world of electronics. At least people are coming out with real digital products now. We're at a kind of second stage with that, the first stage being all that university stuff with million-dollar computers cranking out weird noises. I'm not worried about the cost thing, but I do worry about sound variation and quality, and even more about the human interface. In many cases the musician will have to learn a whole new jargon and set of processes. Digital will have all the obvious things—most of them will be polyphonic, with built-in memory and programming—but it's the interface and the sound that will have to be researched. We're keeping our hands in it, because naturally if we can scoop everybody, we will [laughs], but I don't think it's happening right now. For the next few years hybrid, or a combination of the complete control capabilities with the sound capabilities and qualities, will be the way to go.
Can you discuss any Sequential Circuits projects?
Not really, except for the new 10-voice Prophet. Its sequencer will be able to store up to 2,500 notes, with a built-in cassette deck to store them on, and you can change programs as you play. We probably won't be staying just in synthesizers; unfortunately, that's not a wide-open market. Before the Prophet came out there was room for that type of instrument, but until somebody makes a big change in digital, there's really nothing else to do. You can build a programmable polyphonic synthesizer, but all you're going to change is how many oscillators you have, what kind of filters you put in, or something like that. There is a good market for coming out with a super-cheap monophonic non-programmable synthesizer that sells for maybe a couple of hundred dollars. You can open up a whole educational market with that, selling 30 of them to a school where they can teach little kids how to use synthesizers.
What has been the main effect of bringing programmability and computer technology into the synthesizer market?
Well, it has put synthesizers in a lot of hands that wouldn't have played them, and it has allowed a lot of people to do a lot of real nice things. You can go back to the old argument of it being invalid to imitate horns on a synthesizer, but it is valid to be able to play them polyphonically. And you can do it cost-effectively too. But the thing that really sells any instrument is what it sounds like. If it did all those magic things but didn't really sound all that hot, nobody would get it. It's that simple.