Circuits that Generate Sounds

Electronic Music Machines


Origins of Electronic Sounds


The analog voltage-controlled synthesizer is a collection of waveform and noise generators, modifiers (such as filters, ring modulators, amplifiers), mixers and control devices packaged in modular or integrated form. The generators produce an electronic signal which can be patched through the modifiers and into a mixer or amplifier where it is made audible through loudspeakers. This sequence of interconnections constitutes a signal path which is determined by means of patch cords, switches, or matrix pinboards. Changes in the behaviors of the devices (such as pitch or loudness) along the signal path are controlled from other devices which produce control voltages. These control voltage sources can be a keyboard, a ribbon controller, a random voltage source, an envelope generator or any other compatible voltage source.
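The generator → modifier → mixer signal path and the control-voltage scheme just described can be sketched in software. The following Python sketch is purely illustrative – the module names (`vco_saw`, `lowpass`, `envelope`, `vca`) are our own stand-ins for the generator, modifier and control-voltage devices, not the circuitry of any actual instrument:

```python
import math

SR = 44100  # sample rate, Hz

def vco_saw(freq, n):
    """Generator: a naive sawtooth voltage-controlled oscillator."""
    return [2.0 * ((i * freq / SR) % 1.0) - 1.0 for i in range(n)]

def lowpass(signal, cutoff):
    """Modifier: a one-pole lowpass filter standing in for a VCF."""
    rc = 1.0 / (2.0 * math.pi * cutoff)
    alpha = (1.0 / SR) / (rc + 1.0 / SR)
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)   # smooth toward the input
        out.append(y)
    return out

def envelope(n, attack_frac=0.1):
    """Control-voltage source: a linear attack/decay envelope in 0..1."""
    a = max(1, int(n * attack_frac))
    return [i / a if i < a else max(0.0, 1.0 - (i - a) / (n - a))
            for i in range(n)]

def vca(signal, cv):
    """Modifier: a voltage-controlled amplifier; the control voltage
    scales the audio signal sample by sample."""
    return [s * c for s, c in zip(signal, cv)]

# The patch: oscillator -> filter -> VCA, with the envelope as control voltage
n = SR // 10  # a tenth of a second
patched = vca(lowpass(vco_saw(110.0, n), 800.0), envelope(n))
```

Re-ordering the chain, or routing the envelope to the filter cutoff instead of the amplifier, models exactly the kind of re-patching a modular user would do with patch cords or a matrix pinboard.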

The story of the analog synthesizer has no single beginning. In fact, its genesis is an excellent example of how a good idea often emerges simultaneously in different geographic locations to fulfill a generalized need. In this case the need was to consolidate the various electronic sound generators, modifiers and control devices distributed in fairly bulky form throughout the classic tape studio. The reason for doing this was quite straightforward: to provide individual composers with a personal electronic system that was specifically designed for music composition and/or live performance, and which had the approximate technical capability of the classic tape studio at a lower cost. The geographic locales where this simultaneously occurred were the east coast of the United States, San Francisco, Rome and Australia.

The first electronic instrument actually to be called a synthesizer was the RCA Synthesizer of 1956, designed by Harry Olson and Herbert Belar, two electronic engineers working for RCA's Princeton laboratories. The synthesizer section of the system was based on 12 vacuum-tube oscillators in the Mark I and 24 in the Mark II, and most of its functions were controlled by switching banks of relays to vary resistances (the equivalent of today's digitally controlled potentiometers). A programmable monosynth is no great achievement today, but in the 1950s it was astonishing. The system also incorporated a sequencer which could play music and control the sound in real time, the audio output being recorded on a lacquer disc, much like cutting one's own vinyl record. Input was supplied on punched paper rolls. The initial concept of the RCA system was to generate music automatically, using the patterns and styles of dead composers, folk songs – anything that had pattern and form. In the final analysis, the intent to produce new and usable music from elements of the old did not really work. A grant from the Rockefeller Foundation enabled Columbia University to lease the machine from RCA and set up its electronic music department, one of the first; it was very influential at the time for such composers as Milton Babbitt and Vladimir Ussachevsky.

Other developments of the time included Daphne Oram's novel technique of 'Oramics', which used drawings on 35mm film to produce sound, a system that was employed by the BBC Radiophonic Workshop for several years. In 1962, Oram presented Oramics, the project that consumed so much of her time and resources. She received two consecutive Gulbenkian Foundation grants in the region of £3,500, a sizeable sum in the 1960s, to develop her research. Oram said of Oramics, “I visualize the composer learning an alphabet of symbols with which he will be able to indicate all the parameters needed to build up the sound he requires. These symbols, drawn…freehand on an ordinary piece of paper, will be fed to the equipment and the resultant sound will be recorded onto magnetic tape.”

The concept of drawn sound was not new. The technique of drawing patterns by hand onto the thin soundtrack strip at the edge of 35mm film had been around since the 1920s. Russian film-makers Arseny Avraamov and Yevgeny Sholpo created soundtracks from intricate ink drawings on thin strips 1.93mm in width, and Norman McLaren used drawn sound in many of his films. The South African electronics engineer Johannes van der Bijl, working in the 1940s, developed a method of recording sound using photographed waveforms on 35mm film, which passed across and interrupted a steady beam of light, thus generating an electrical impulse representing the sound. The Oramics system, however, offered a more lucid, free and at the same time more precise analogue of sound waveforms.

Peter Manning noted “The ability to draw the dynamic shaping of pitched events not only allows a readily assimilated audio-visual correlation of specifications, it also overcomes the rigid attack and decay characteristics of electronic envelope shapers”.

The concept of modularity usually associated with the analog synthesizer must be credited to Harald Bode, who in 1960 completed the construction of his MODULAR SOUND MODIFICATION SYSTEM. Consisting of a ring modulator, envelope follower, tone-burst-responsive envelope generator, voltage-controlled amplifier, filters, mixers, pitch extractor, comparator and frequency divider, and a tape-loop repeater, this device in many ways anticipated the more concise and powerful modular synthesizers that began to be designed in the early 1960s. It may have had some indirect influence on Robert Moog, but the idea for his modular synthesizer appears to have evolved from another set of circumstances. Although American inventor Donald Buchla did create a commercially available synthesizer, most instruments of the 1950s and early 1960s were, due to their vast size and complexity, confined to academic institutions and studios. The wider explosion of interest in the synthesizer was the credit of the man whose name became synonymous with the instrument – Robert Moog.

In contrast to Moog’s industrial stance, the rather countercultural design philosophy of DONALD BUCHLA and his voltage-controlled synthesizers can partially be attributed to the geographic locale and cultural circumstances of their genesis. In 1961 San Francisco was beginning to emerge as a major cultural center, with several vanguard composers organizing concerts and other performance events. MORTON SUBOTNICK was starting his career in electronic music experimentation, as were PAULINE OLIVEROS, Ramon Sender and TERRY RILEY. A primitive studio had been started at the San Francisco Conservatory of Music by Sender, where he and Oliveros had begun a series of experimental music concerts. In 1962 this equipment and other resources from electronic surplus sources were pooled together by Sender and Subotnick to form the San Francisco Tape Music Center, which was later moved to Mills College in 1966. Because of the severe limitations of the equipment, Subotnick and Sender sought out the help of a competent engineer in 1962 to realize a design they had concocted for an optically based sound-generating instrument. After a few failed attempts at hiring an engineer they met DONALD BUCHLA, who realized their design but subsequently convinced them that this was the wrong approach to solving their equipment needs. Their subsequent discussions resulted in the concept of a modular system. Subotnick describes their idea in the following terms:

“Our idea was to build the black box that would be a palette for composers in their homes. It would be their studio. The idea was to design it so that it was like an analog computer. It was not a musical instrument but it was modular... It was a collection of modules of voltage-controlled envelope generators and it had sequencers in it right off the bat... It was a collection of modules that you would put together. There were no two systems the same until CBS bought it... Our goal was that it should be under $400 for the entire instrument and we came very close. That’s why the original instrument I fundraised for was under $500.”

Buchla’s design approach differed markedly from Moog’s. Right from the start Buchla rejected the idea of a “synthesizer” and has resisted the word ever since. He never wanted to “synthesize” familiar sounds but rather emphasized new timbral possibilities. He stressed the complexity that could arise out of randomness and was intrigued by the design of new control devices other than the standard keyboard. He summarizes his philosophy and distinguishes it from Moog’s in the following statement:

“I would say that philosophically the prime difference in our approaches was that I separated sound and structure and he didn’t. Control voltages were interchangeable with audio. The advantage of that is that he required only one kind of connector and that modules could serve more than one purpose. There were several drawbacks to that kind of general approach, one of them being that a module designed to work in the structural domain at the same time as the audio domain has to make compromises. DC offset doesn’t make any difference in the sound domain but it makes a big difference in the structural domain, whereas harmonic distortion makes very little difference in the control area but it can be very significant in the audio areas. You also have a matter of just being able to discern what’s happening in a system by looking at it. If you have a very complex patch, it’s nice to be able to tell what aspect of the patch is the structural part of the music versus what is the signal path and so on. There’s a big difference in whether you deal with linear versus exponential functions at the control level and that was a very inhibiting factor in Moog’s more general approach.

"Uncertainty is the basis for alot of my work. One always operates somewhere between the totally predictable and the totally unpredictable and to me the “source of uncertainty,” as we called it, was a way of aiding the composer. The predictabilities could be highly defined or you could have a sequence of totally random numbers. We had voltage control of the randomness and of the rate of change so that you could randomize the rate of change. In this way you could make patterns that were of more interest than patterns that are totally random.”

While the early Buchla instruments contained many of the same modular functions as the Moog, they also contained a number of unique devices such as random control-voltage sources, sequencers and voltage-controlled spatial panners. Buchla has maintained his unique design philosophy over the intervening years, producing a series of highly advanced instruments often incorporating hybrid digital circuitry and unique control interfaces.

The Moog synthesizer

Moog (pronounced to rhyme with 'vogue') had always been interested in electronic music, having built theremins with his father throughout the 1950s. Inspired by experimental composer Herbert Deutsch, Moog designed the circuits for his first synthesizer while studying for his PhD in Engineering Physics at Cornell University, where he was a student of Peter Mauzey, an RCA engineer who had worked on the Mark II Music Synthesizer.

In 1963, MOOG was selling transistorized Theremins in kit form from his home in Ithaca, New York. Early in 1964 the composer Herbert Deutsch was using one of these instruments, and the two began to discuss the application of solid-state technology to the design of new instruments and systems. These discussions led Moog to complete his first prototype of a modular electronic music synthesizer later that year. By 1966 the first production model was available from the new company he had formed to produce this instrument. The first systems which Moog produced were principally designed for studio applications and were generally large modular assemblages that contained voltage-controlled oscillators, filters, voltage-controlled amplifiers, envelope generators, and a traditional-style keyboard for voltage control of the other modules. Interconnection between the modules was achieved through patch cords. By 1969 Moog saw the necessity for a smaller portable instrument and began to manufacture the Mini Moog, a concise version of the studio system that contained an oscillator bank, filter, mixer, VCA and keyboard. As an instrument designer Moog was always a practical engineer. His basically commercial but egalitarian philosophy is best exemplified by some of the advertising copy which accompanied the Mini Moog in 1969 and resulted in its becoming the most widely used synthesizer in the “music industry”:

“R.A. Moog, Inc. built its first synthesizer components in 1964. At that time, the electronic music synthesizer was a cumbersome laboratory curiosity, virtually unknown to the listening public. Moog demonstrated his first synthesizer at the AES (Audio Engineering Society) convention in 1964. Like the RCA machine, Moog's synthesizer was a flexible modular design in that the instrument comprised several different sections, or modules, each with a different function, which could be patched (connected) together in different combinations. Moog's first design still required a great deal of programming time, although it was smaller, lighter and more flexible than the Mark II Music Synthesizer.

Interest in the new instrument was immediate and Moog began making modular synthesizers for experimental composers and the academic community. Widespread awareness of Moog's name came when the synthesizer was featured on a number of commercially successful albums. Today, the Moog synthesizer has proven its indispensability through its widespread acceptance. Moog synthesizers are in use in hundreds of studios maintained by universities, recording companies, and private composers throughout the world. Dozens of successful recordings, film scores, and concert pieces have been realized on Moog synthesizers. The basic synthesizer concept as developed by R.A. Moog, Inc., as well as a large number of technological innovations, have literally revolutionized the contemporary musical scene, and have been instrumental in bringing electronic music into the mainstream of popular listening.”

The Minimoog

In 1970, Robert Moog produced another groundbreaking instrument, the Minimoog. Unlike previous synthesizers, the Minimoog abandoned the modular design in favor of building all the electronics into a single keyboard unit. What was sacrificed in terms of modular flexibility was gained in ease of use and portability.

"In designing the Mini Moog, R. A. Moog engineers talked with hundreds of musicians to find out what they wanted in a performance synthesizer. Many prototypes were built over the past two years, and tried out by musicians in actual live-performance situations. Mini Moog circuitry is a combination of our time-proven and reliable designs with the latest developments in technology and electronic components.

The result is an instrument which is applicable to studio composition as much as to live performance, to elementary and high school music education as much as to university instruction, to the demands of commercial music as much as to the needs of the experimental avant garde. The Mini Moog offers a truly unique combination of versatility, playability, convenience, and reliability at an eminently reasonable price.”

The Synth Wars

After these initial efforts a number of other American designers and manufacturers followed the lead of Buchla and Moog. One of the most successful was the ARP SYNTHESIZER built by Tonus, Inc. with design innovations by the team of Dennis Colin and David Friend. The studio version of the ARP was introduced in 1970 and basically imitated modular features of the Moog and Buchla instruments.

A year later they introduced a smaller portable version which included a preset patching scheme that simplified the instrument’s function for the average pop-oriented performing musician. Other manufacturers included EML, makers of the ELECTRO-COMP, a small synthesizer oriented to the educational market; OBERHEIM, maker of some of the earliest polyphonic synthesizers; muSonics, makers of the SONIC V SYNTHESIZER; PAIA, makers of a synthesizer in kit form; Roland; Korg; and Serge Tcherepnin, whose highly sophisticated line of modular analog systems was designed, manufactured and sold as Serge Modular Music Systems.

In Europe the major manufacturer was undoubtedly EMS, a British company founded by its chief designer Peter Zinovieff. EMS built the Synthi 100, a large integrated system which introduced a matrix-pinboard patching system, and a small portable synthesizer based on similar design principles, initially called the Putney but later modified into the SYNTHI A, or Portabella. This latter instrument became very popular with a number of composers who used it in live performance situations.

One of the more interesting footnotes to this history of the analog synthesizer is the rather problematic relationship that many of the designers have had with commercialization and the subsequent solution of manufacturing problems. While the commercial potential for these instruments became evident very early in the 1960s, the differing aesthetic and design philosophies of the engineers demanded that they deal with this realization in different ways. Buchla, who early on got burnt by larger corporate interests, has dealt with the burden of marketing by essentially remaining a cottage industry, assembling and marketing his instruments from his home in Berkeley, California. Moog, by contrast, was a fairly competent businessman who grew a small operation in his home into a distinctly commercial endeavor; yet even he ultimately left Moog Music in 1977, after the company had been acquired by two larger corporations, to pursue his own design interests.

Polyphonic synthesizers

Until the mid-1970s most synthesizers were monophonic – that is to say, they were capable of producing only one note at a time. A few exceptions, including Moog's Sonic Six and the ARP Odyssey, were duophonic (able to play two notes at once). True polyphonic instruments, able to play chords, appeared in 1975 in the form of the Polymoog, with the equally classic Yamaha CS-80 and the Oberheim Four-Voice being released the following year.

The advent of affordable microprocessor integrated circuits enabled manufacturers to bring the advantages of digital control and memory to the synthesizer. In 1978, Sequential Circuits introduced the Prophet 5, a fully programmable polyphonic synthesizer whose digital patch memory stored and recalled every panel setting.

Analog Versus Digital

Analog refers to systems in which a physical quantity is represented by an analogous physical quantity. The traditional audio recording chain demonstrates this quite well, since each stage of translation constitutes a physical system that is analogous to the previous one in the chain. The fluctuations of air molecules which constitute sound are translated into fluctuations of electrons by a microphone diaphragm. These electrical fluctuations are then converted, via the bias current of a tape recorder, into patterns of magnetic particles on a piece of tape. Upon playback the process is reversed, the fluctuations of electrons being amplified into fluctuations of a loudspeaker cone in space. The final displacement of air molecules results in an analogous representation of the original sounds that were recorded. Digital refers to systems in which a physical quantity is represented through a counting process. In digital computers this counting process consists of binary coding of electrical on-off switching states. In computer music the resultant digital code represents the various parameters of sound and its organization.
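The counting process can be made concrete with a toy analog-to-digital conversion in Python: a continuous signal is sampled at regular intervals, and each sample is mapped to one of 2^b discrete levels, each expressible as a binary number. The 8 Hz sample rate and 3-bit resolution below are chosen only to keep the output readable, not to reflect any real converter:

```python
import math

def quantize(x, bits):
    """Map a value in [-1.0, 1.0] to one of 2**bits integer levels."""
    levels = 2 ** bits
    # scale to 0..levels-1, rounding, and clamp at the edges
    q = int((x + 1.0) / 2.0 * (levels - 1) + 0.5)
    return max(0, min(levels - 1, q))

def sample_and_quantize(f, sample_rate, duration, bits):
    """Sample a continuous function f(t) at regular intervals and
    quantize each sample -- the two steps of A/D conversion."""
    n = int(sample_rate * duration)
    return [quantize(f(i / sample_rate), bits) for i in range(n)]

# A 1 Hz sine "analog" signal, sampled at 8 Hz with 3-bit resolution
codes = sample_and_quantize(lambda t: math.sin(2 * math.pi * t), 8, 1.0, 3)
print([format(c, '03b') for c in codes])
# → ['100', '110', '111', '110', '100', '001', '000', '001']
```

Reversing the two steps – reading the codes back out through a digital-to-analog converter and smoothing the result – recovers an approximation of the original signal, which is exactly the playback half of the digital audio chain.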


As early as 1954, the composer IANNIS XENAKIS had used a computer to aid in calculating the velocity trajectories of glissandi for his orchestral composition Metastasis. Since his background included a strong mathematical education, this was a natural development in keeping with his formal interest in combining mathematics and music. The search that had begun earlier in the century for new sounds and organizing principles that could be mathematically rationalized had become a dominant issue by the mid-1950s. Serial composers like MILTON BABBITT had been dreaming of an appropriate machine to assist in complex compositional organization. While the RCA Music Synthesizer fulfilled much of this need for Babbitt, other composers desired even more machine-assisted control. LEJAREN HILLER, a former student of Babbitt, saw the compositional potential in the early generation of digital computers and generated the Illiac Suite for string quartet as a demonstration of this promise in 1956.

Xenakis continued to develop, in a much more sophisticated manner, his unique approach to computer-assisted instrumental composition. Between 1956 and 1962 he composed a number of works, such as Morsima-Amorsima, using the computer as a mathematical aid for finalizing calculations that were applied to instrumental scores. Xenakis stated that his use of probabilistic theories and the IBM 7090 computer enabled him to advance “...a form of composition which is not the object in itself, but an idea in itself, that is to say, the beginnings of a family of compositions.”

The early vision of why computers should be applied to music was elegantly expressed by the scientist Heinz von Foerster: “Accepting the possibilities of extensions in sounds and scales, how do we determine the new rules of synchronism and succession?

"It is at this point, where the complexity of the problem appears to get out of hand, that computers come to our assistance, not merely as ancillary tools but as essential components in the complex process of generating auditory signals that fulfill a variety of new principles of a generalized aesthetics and are not confined to conventional methods of sound generation by a given set of musical instruments or scales nor to a given set of rules of synchronism and succession based upon these very instruments and scales. The search for those new principles, algorithms, and values is, of course, in itself symbolic for our times.”

The actual use of the computer to generate sound first occurred at Bell Labs, where Max Mathews used a primitive digital-to-analog converter to demonstrate the possibility in 1957. Mathews became the central figure at Bell Labs in the technical evolution of computer-generated sound research and compositional programming over the next decade. In 1961 he was joined by the composer JAMES TENNEY, who had recently graduated from the University of Illinois, where he had worked with Hiller and Gaburo to finish a major theoretical thesis entitled Meta + Hodos. For Tenney, the Bell Labs residency was a significant opportunity to apply his advanced theoretical thinking (involving the application of theories from Gestalt psychology to music and sound perception) in the compositional domain. From 1961 to 1964 he completed a series of works which include what are probably the first serious compositions using the MUSIC IV program of Max Mathews and Joan Miller, and therefore the first serious compositions using computer-generated sounds: Noise Study, Four Stochastic Studies, Dialogue, Stochastic String Quartet, Ergodos I, Ergodos II, and PHASES.

In 1965 the research at Bell Labs resulted in the successful reproduction of an instrumental timbre: a trumpet waveform was recorded, converted into a numerical representation, and, when converted back into analog form, deemed virtually indistinguishable from its source. This accomplishment by Mathews, Miller and the French composer JEAN-CLAUDE RISSET marks the beginning of the recapitulation of the traditional representationist-versus-modernist dialectic in the new context of digital computing. When contrasted against Tenney’s use of the computer to obtain entirely novel waveforms and structural complexities, the use of such immense technological resources to reproduce the sound of a trumpet appeared to many composers to be a gigantic exercise in misplaced concreteness. Seen in the subsequent historical light of the breakthroughs in digital recording and sampling technologies that can be traced back to this initial experiment, the original computing expense certainly appears to have been vindicated. However, the dialectic of representationism and modernism has only become more problematic in the intervening years.

The development of computer music has from its inception been so critically linked to advances in hardware and software that its practitioners have, until recently, constituted a distinct class of specialized enthusiasts within the larger context of electronic music. The challenge that early computers and computing environments presented to creative musical work was immense. In retrospect, the task of learning to program and pit one’s musical intelligence against the machine constraints of those early days now takes on an almost heroic air. In fact, the development of computer music composition is definitely linked to the evolution of greater interface transparency, such that the task of composition could be freed from the other arduous tasks associated with programming. The first stage in this evolution was the design of specific music-oriented programs such as MUSIC IV. The 1960s saw gradual additions to these languages, such as MUSIC IVB (a greatly expanded assembly-language version by Godfrey Winham and Hubert S. Howe), MUSIC IVBF (a Fortran version of MUSIC IVB), and MUSIC 360 (a music program written for the IBM 360 computer by Barry Vercoe). The composer Charles Dodge wrote during this time about the intent of these music programs for sound synthesis:

“It is through simulating the operations of an ideal electronic music studio with an unlimited amount of equipment that a digital computer synthesizes sound. The first computer sound synthesis program that was truly general-purpose (i.e., one that could, in theory, produce any sound) was created at the Bell Telephone Laboratories in the late 1950’s. A composer using such a program must typically provide: (1) Stored functions which will reside in the computer’s memory representing waveforms to be used by the unit generators of the program. (2) “Instruments” of his own design which logically interconnect these unit generators. (Unit generators are subprograms that simulate all the sound generation, modification, and storage devices of the ideal electronic music studio.) The computer “instruments” play the notes of the composition. (3) Notes may correspond to the familiar “pitch in time” or, alternatively, may represent some convenient way of dividing the time continuum.”
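Dodge's three ingredients – stored functions, unit generators, and "instruments" that interconnect them – can be sketched in a few lines of Python. This is a loose illustration of the MUSIC IV idea, not its actual code; the names `oscil` and `instrument` are our own stand-ins:

```python
import math

SAMPLE_RATE = 44100
TABLE_SIZE = 512

# (1) Stored function: one cycle of a waveform held in memory.
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscil(amp, freq, table, n):
    """(2) Unit generator: a table-lookup oscillator in the spirit of
    the classic MUSIC-N oscillator subprograms."""
    phase, out = 0.0, []
    incr = freq * TABLE_SIZE / SAMPLE_RATE
    for _ in range(n):
        out.append(amp * table[int(phase) % TABLE_SIZE])
        phase += incr
    return out

def instrument(pitch, dur):
    """(3) An 'instrument': unit generators logically interconnected --
    here one oscillator, read slowly as an envelope, scales another."""
    n = int(dur * SAMPLE_RATE)
    env = oscil(1.0, 1.0 / dur / 2, sine_table, n)  # half a sine cycle as envelope
    tone = oscil(1.0, pitch, sine_table, n)
    return [e * t for e, t in zip(env, tone)]

note = instrument(440.0, 0.1)  # "play" one note: 440 Hz for 0.1 seconds
```

A score would then be just a list of (pitch, duration) pairs handed to `instrument` – which is essentially what the note lists of the MUSIC-N languages were.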

By the end of the 1960s computer sound synthesis research saw a large number of new programs in operation at a variety of academic and private institutions. The demands of the medium, however, were still quite tedious and, regardless of the increased sophistication in control, it remained a tape medium in its final product. Some composers had taken the initial steps toward using the computer for real-time performance by linking the powerful control functions of the digital computer to the sound generators and modifiers of the analog synthesizer. We will deal with the specifics of this development in the next section. From its earliest days the use of the computer in music can be divided into two fairly distinct categories, even though these categories have been blurred in some compositions: 1) the use of the computer predominantly as a compositional device to generate structural relationships that could not be imagined otherwise, and 2) the use of the computer to generate new synthetic waveforms and timbres.

The Digital Synthesizer Devolution

By the end of the 1970s most innovations in hardware design had been taken over by industry in response to the emerging needs of popular culture. The film and music “industries” became the major forces in establishing technical standards, which impacted subsequent electronic music hardware design. While the industrial representationist agenda succeeded in the guise of popular culture, some pioneering creative work continued within the divergent contexts of academic tape studios and computer music research centers, and in the non-institutional aesthetic research of individual composers. While specialized venues still exist where experimental work can be heard, access to such work has become progressively more difficult.

One of the most important shifts to occur in the 1980s was the progressive abandonment of analog electronics in favor of digital systems which could potentially recapitulate and summarize the prior history of electronic music in standardized forms. The Prophet 5 paved the way for the all-digital synthesizer, and in 1983 Yamaha introduced the world to FM synthesis in the form of the DX7. The DX7 also featured another development, first seen in 1982 on the Sequential Circuits Prophet 600: MIDI (Musical Instrument Digital Interface). By the mid-1980s the industrial onslaught of highly redundant MIDI-interfaceable digital synthesizers, processors, and samplers even began to displace the commercial merchandising of traditional acoustic orchestral and band instruments. By 1990 these commercial technologies had become a ubiquitous cultural presence that largely defined the nature of the music being produced.


MIDI is an industry-standard communication protocol that enables electronic musical instruments (and computer systems) to be connected to one another to exchange musical data, such as note values or program-change information. Prior to MIDI, different manufacturers each pursued their own proprietary standards. This meant that getting, for example, a Yamaha instrument to communicate with a Korg device was just about impossible. In 1981, Dave Smith of Sequential Circuits proposed the idea of a standard interface in a paper to the AES, and the MIDI Specification 1.0 was published in 1983. The almost universal adoption of MIDI ensured that it became a key technology central to stage and studio, with applications beyond the purely musical, such as the control of lights.
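Concretely, a MIDI channel message is just a few bytes on a serial link: a status byte whose high nibble gives the message type and whose low nibble gives the channel, followed by data bytes in the range 0–127. The sketch below builds the standard three-byte note-on and note-off messages:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI note-on message.
    Status 0x9n: message type 9 (note-on) in the high nibble,
    channel 0-15 in the low nibble; data bytes must be 0-127."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Build a 3-byte MIDI note-off message (status 0x8n)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127
    return bytes([0x80 | channel, note, 0])

# Middle C (note number 60) at moderate velocity on channel 0:
msg = note_on(0, 60, 100)   # bytes 0x90, 0x3C, 0x64
```

Because every conforming instrument interprets these same bytes the same way, a Yamaha keyboard can trigger notes on a Korg module – precisely the interoperability the pre-MIDI proprietary interfaces lacked.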


What began in the 20th century as a utopian and vaguely Romantic passion, namely that technology offered an opportunity to expand human perception and provide new avenues for the discovery of reality, subsequently evolved through the 1960s into an intoxication with this humanistic agenda as a social critique and counter-cultural movement. The irony is that many of the artists who were most concerned with technology as a counter-cultural social critique built tools that ultimately became the resources for an industrial movement that in large part eradicated their ideological concerns. Most of these artists and their work have fallen into the anonymous cracks of a consumer culture that now regards their experimentation merely as inherited technical R & D. While the mass distribution of the electronic means of musical production appears to be an egalitarian success, as a worst-case scenario it may also signify the suffocation of the modernist dream at the hands of industrial profiteering. To quote the philosopher Jacques Attali: “What is called music today is all too often only a disguise for the monologue of power. However, and this is the supreme irony of it all, never before have musicians tried so hard to communicate with their audience, and never before has that communication been so deceiving. Music now seems hardly more than a somewhat clumsy excuse for the self-glorification of musicians and the growth of a new industrial sector.”

From a slightly more optimistic perspective, the current dissolving of emphasis upon heroic individual artistic contributions, within the context of the current proliferation of musical technology, may signify the emergence of a new socio-political structure: the means to create transcends the created objects and the personality of the object’s creator. The mass dissemination of new tools and instruments either signifies the complete failure of the modernist agenda or it signifies the culminating expression of commoditization through mass production of the tools necessary to deconstruct the redundant loop of consumption. After decades of selling records as a replacement for the experience of creative action, the music industry now sells the tools which may facilitate that creative participation. We shift emphasis to the means of production instead of the production of consumer demand.

How the evolution of electronic music unfolds will depend upon the dynamical properties of a dialectical synthesis between industrial forces and the survival of the modernist belief in the necessity of technology as a humanistic potential. Whether the current users of these tools can resist the redundancy of industrially determined design biases, induced by the clichés of commercial market forces, depends upon the continuation of a belief in the necessity for alternative voices willing to articulate that which the status quo is unwilling to hear.

Methods of synthesis

Although most synthesizers have many fundamental principles in common, there are, in fact, several different forms of synthesis technique. These include:
  1. Additive: in which pure sine tones are combined to create different timbres according to principles discovered by French mathematician Joseph Fourier.
  2. Subtractive: in which waveforms rich in harmonics, such as saw-tooth or square wave, produced by a VCO (voltage-controlled oscillator) are passed through filters that can strip away or accentuate certain harmonics. Most analog synthesizers employ subtractive techniques.
  3. FM (Frequency Modulation): devised in the early 1970s by John Chowning at Stanford University, FM was licensed to Yamaha for use in their DX instruments. FM involves the frequency of one waveform being used to modulate (modify or influence) the frequency of another, resulting in a new, much more complex sound.
  4. Granular: uses multiple layers of very short (1–50 milliseconds) waveforms called 'grains' to create 'clouds' of sound.
  5. Physical modeling: uses complex mathematical equations to simulate the physical characteristics of, for example, a plucked string or a struck drumskin. Due to the huge amount of processing involved, physical modeling has only been possible in real time with the development of extremely powerful processors. The first commercially available instrument to employ physical modeling was the Yamaha VL-1 in 1994.
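As a minimal illustration of the first and third techniques, the sketch below (plain Python; the 44.1 kHz sample rate and all function names are assumptions for illustration, not any particular instrument's design) builds a sawtooth-like tone by summing sine partials and a more complex tone by frequency modulation:

```python
import math

SAMPLE_RATE = 44100  # samples per second; an assumed, conventional rate

def additive(partials, n_samples, sr=SAMPLE_RATE):
    """Additive synthesis: sum pure sine partials given as (freq_hz, amplitude) pairs."""
    return [
        sum(amp * math.sin(2 * math.pi * freq * n / sr) for freq, amp in partials)
        for n in range(n_samples)
    ]

def fm(carrier_hz, modulator_hz, index, n_samples, sr=SAMPLE_RATE):
    """FM synthesis in outline: a modulator sine deviates the carrier's phase,
    creating sidebands at carrier +/- k * modulator frequencies."""
    return [
        math.sin(2 * math.pi * carrier_hz * n / sr
                 + index * math.sin(2 * math.pi * modulator_hz * n / sr))
        for n in range(n_samples)
    ]

# A sawtooth-like tone from its first five Fourier partials (amplitudes 1/k),
# and a clangorous FM tone using a fairly high modulation index.
saw = additive([(440 * k, 1.0 / k) for k in range(1, 6)], 1000)
bell = fm(440, 110, 5.0, 1000)
```

Subtractive synthesis would instead start from a harmonically rich signal like `saw` and filter harmonics away; granular and physical-modeling methods need considerably more machinery than fits in a sketch like this.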

Other important concepts

  1. Envelopes: these refer to how aspects of a sound behave over time. For example, in terms of volume, a cymbal crash has a fast attack and a long, slow decay. Typical synthesizer envelope generators let the user shape the attack, decay, sustain, and release (ADSR) portions of a sound.
  2. Modulation: this simply means to modify or influence, and is a key concept in bringing expressiveness to synthesized sounds. Many different aspects of sound can be modulated. For example, the pitch of an oscillator can be modulated to produce a vibrato effect or the cut-off frequency of a filter can be modulated to create a characteristic sweeping sound. Modulation can be achieved by the player operating synthesizer controls such as the modulation wheel or by increasing pressure on an after-touch-sensitive keyboard.
  3. Effects: most modern synthesizers allow the player to further modify the sound through the application of effects such as reverberation.
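The ADSR behaviour described above can be sketched as a piecewise-linear envelope (plain Python; the function and parameter names are illustrative assumptions, not any particular synthesizer's API):

```python
SAMPLE_RATE = 44100  # samples per second; an assumed, conventional rate

def adsr(attack, decay, sustain, release, gate_time, n_samples, sr=SAMPLE_RATE):
    """Piecewise-linear ADSR envelope. Times are in seconds; sustain is a
    level between 0 and 1; gate_time is how long the key is held down."""
    a = max(1, int(attack * sr))   # attack length in samples
    d = max(1, int(decay * sr))    # decay length in samples
    r = max(1, int(release * sr))  # release length in samples
    gate = int(gate_time * sr)     # sample index at which the key is released
    env = []
    for n in range(n_samples):
        if n < a:                  # attack: ramp 0 -> 1
            env.append(n / a)
        elif n < a + d:            # decay: ramp 1 -> sustain level
            env.append(1 - (1 - sustain) * (n - a) / d)
        elif n < gate:             # sustain: hold while the key is down
            env.append(sustain)
        elif n < gate + r:         # release: ramp sustain -> 0
            env.append(sustain * (1 - (n - gate) / r))
        else:
            env.append(0.0)
    return env

# Envelope for a note held for 0.5 s, rendered over 0.8 s
env = adsr(attack=0.01, decay=0.1, sustain=0.7, release=0.2,
           gate_time=0.5, n_samples=int(0.8 * SAMPLE_RATE))
```

Multiplying each oscillator sample by the corresponding envelope value shapes the note's loudness over time; modulation works analogously, e.g. adding a slow, small-amplitude sine (an LFO) to an oscillator's frequency produces vibrato.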


Synthesis (SYNTH) (from the ancient Greek syn, "with", and thesis, "placing") refers to a combination of two or more entities that together form something new; alternatively, it refers to creating something by artificial means. The corresponding verb, to synthesize (or synthesise), means to make or form a synthesis.


Wizard (WYZRD) n 1. a person who practises or professes to practise magic or sorcery 2. a person who is outstandingly clever in some specified field; an expert 3. a wise man 4. computing a computer program that guides a user through a complex task adj 5. informal, chiefly Brit superb; outstanding 6. of or relating to a wizard or wizardry [variant of wissard, from wise¹ + -ard] wizardly adj. Etymology: wizard c.1440, "philosopher, sage," from M.E. wys "wise" (see wise (adj.)) + -ard; compare Lithuanian zynyste "magic," zynys "sorcerer," zyne "witch," all from zinoti "to know." The ground sense is perhaps "to know the future."

    SynthWizards Network Directory

  • ControlVoltage
  • SynthWizards Forum
  • MuffWiggler
  • Nihilist Records
  • Maniacs Only Forum
  • Doepfer MusikElectronic
  • MFBerlin
  • SynthaSystem
  • AliensProject
  • VintagePlanet
  • Electro-Music
  • Analogue Solutions
  • ElectricDruid
  • Music from Outer Space
  • KVR VST Programming
  • CK Modules
  • Birth of a Synth
  • DIY AudioFX
  • Console Sound Modular
  • FreeFrame Video Synthesis
  • Pure Data
  • PAIA
  • EML synth users
  • PAIA synth users
  • SequentialCircuits users
  • VintageSynthRepair
  • Muff Wiggler
  • Ad Infinitum
  • Control
  • Percussa
  • enclave
  • Nova Musik
  • Perfect Circuit Audio
  • Electric Music Store


  • Moog Music
  • Ad Infinitum
  • Innerclock Systems
  • Bleep Labs
  • Division 6
  • Electronic Music Works
  • Tiptop Audio
  • Mattson Mini Modular
  • Thonk DIY Modular
  • Audulus
  • Synthrotek
  • Malekko
  • Laurentide Synthworks
  • Evaton Technologies
  • Animodule
  • nw2s


  • Modular Square
  • Equinox OZ
  • Perfect Circuit Audio
  • Control Voltage
  • SYNTHCUBE
  • Big City Music