Electronic musical instrument
from Wikipedia

Robert Moog, inventor of the Moog synthesizer

An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument outputs an electrical, electronic or digital audio signal that is ultimately fed into a power amplifier driving a loudspeaker, creating the sound heard by the performer and listener.

An electronic instrument might include a user interface for controlling its sound, often by adjusting the pitch, frequency, or duration of each note. A common user interface is the musical keyboard. On an acoustic piano, each key is linked mechanically to a hammer that strikes the strings, whereas on an electronic keyboard the keys are linked to a synth module, computer or other electronic or digital sound generator, which then creates the sound. However, it is increasingly common to separate the user interface and sound-generating functions into a music controller (input device) and a music synthesizer, respectively, with the two devices communicating through a musical performance description language such as MIDI or Open Sound Control. The solid-state nature of electronic keyboards also gives them a different "feel" and "response" from a mechanically linked piano keyboard, offering a novel playing experience.

All electronic musical instruments can be viewed as a subset of audio signal processing applications. Simple electronic musical instruments are sometimes called sound effects; the border between sound effects and actual musical instruments is often unclear.

In the 21st century, electronic musical instruments are widely used in most styles of music. In popular music styles such as electronic dance music, almost all of the instrument sounds used in recordings are electronic instruments (e.g., bass synth, synthesizer, drum machine). Development of new electronic musical instruments, controllers, and synthesizers continues to be a highly active and interdisciplinary field of research. Specialized conferences, such as the International Conference on New Interfaces for Musical Expression, have been organized to report cutting-edge work, as well as to provide a showcase for artists who perform or create music with new electronic music instruments, controllers, and synthesizers.

Classification


In musicology, electronic musical instruments are known as electrophones. Electrophones are the fifth category of musical instrument under the Hornbostel-Sachs system. Musicologists typically only classify instruments as electrophones if the sound is initially produced by electricity, excluding electronically controlled acoustic instruments such as pipe organs and amplified instruments such as electric guitars.

The category was added to the Hornbostel-Sachs musical instrument classification system by Curt Sachs in his 1940 book The History of Musical Instruments;[1] the original 1914 version of the system did not include it. Sachs divided electrophones into three subcategories: 51, electrically actuated acoustic instruments; 52, electrically amplified acoustic instruments; and 53, instruments in which the sound is produced primarily by electrically driven oscillators.

The last category included instruments such as theremins or synthesizers, which he called radioelectric instruments.

Francis William Galpin provided such a group in his own classification system, which is closer to Mahillon than Sachs-Hornbostel. For example, in Galpin's 1937 book A Textbook of European Musical Instruments, he lists electrophones with three second-level divisions for sound generation ("by oscillation", "electro-magnetic", and "electro-static"), as well as third-level and fourth-level categories based on the control method.[2]

Present-day ethnomusicologists, such as Margaret Kartomi[3] and Terry Ellingson,[4] suggest that, in keeping with the spirit of the original Hornbostel-Sachs classification scheme, instruments should be categorized by what first produces the initial sound, so that only subcategory 53 would remain in the electrophones category. Thus, it has been more recently proposed, for example, that the pipe organ (even if it uses electric key action to control solenoid valves) remain in the aerophones category, and that the electric guitar remain in the chordophones category, and so on.

Early examples

Diagram of the clavecin électrique

In the 18th century, musicians and composers adapted a number of acoustic instruments to exploit the novelty of electricity. Thus, in the broadest sense, the first electrified musical instrument was the Denis d'or keyboard, dating from 1753, followed shortly by the clavecin électrique of the Frenchman Jean-Baptiste de Laborde in 1761. The Denis d'or was a keyboard instrument of over 700 strings, electrified temporarily to enhance its sonic qualities. The clavecin électrique was a keyboard instrument with plectra (picks) activated electrically. However, neither instrument used electricity as a sound source.

The first electric synthesizer was invented in 1876 by Elisha Gray.[5][6] His "Musical Telegraph" was a chance by-product of his telephone technology: Gray discovered that he could control sound from a self-vibrating electromagnetic circuit, and in doing so invented a basic oscillator. The Musical Telegraph used steel reeds oscillated by electromagnets, whose tones were transmitted over a telephone line. Gray also built a simple loudspeaker device into later models, consisting of a diaphragm vibrating in a magnetic field.

A significant invention, which later had a profound effect on electronic music, was the audion in 1906. This was the first thermionic valve, or vacuum tube, and it led to the generation and amplification of electrical signals, radio broadcasting, and electronic computation, among other things. Other early synthesizers included the Telharmonium (1897), the Theremin (1919), Jörg Mager's Spharophon (1924) and Partiturophone, Taubmann's similar Electronde (1933), Maurice Martenot's ondes Martenot ("Martenot waves", 1928), and Trautwein's Trautonium (1930). The Mellertion (1933) used a non-standard scale, Bertrand's Dynaphone could produce octaves and perfect fifths, the Emicon was an American, keyboard-controlled instrument constructed in 1930, and the German Hellertion combined four instruments to produce chords. Three Russian instruments also appeared: Oubouhof's Croix Sonore (1934), Ivor Darreg's microtonal 'Electronic Keyboard Oboe' (1937) and the ANS synthesizer, constructed by the Russian scientist Evgeny Murzin from 1937 to 1958. Only two models of the latter were built, and the only surviving example is currently stored at Lomonosov University in Moscow. It has been used in many Russian movies, such as Solaris, to produce unusual, "cosmic" sounds.[7][8]

Hugh Le Caine, John Hanert, Raymond Scott, composer Percy Grainger (with Burnett Cross), and others built a variety of automated electronic-music controllers during the late 1940s and 1950s. In 1959 Daphne Oram produced a novel method of synthesis, her "Oramics" technique, driven by drawings on a 35 mm film strip; it was used for a number of years at the BBC Radiophonic Workshop.[9] This workshop was also responsible for the theme to the TV series Doctor Who, a piece largely created by Delia Derbyshire that, more than any other, ensured the popularity of electronic music in the UK.

Telharmonium

Telharmonium console, by Thaddeus Cahill, 1897

In 1897 Thaddeus Cahill patented an instrument called the Telharmonium (or Teleharmonium, also known as the Dynamaphone). Using tonewheels to generate musical sounds as electrical signals by additive synthesis, it was capable of producing any combination of notes and overtones, at any dynamic level. This technology was later used to design the Hammond organ. Between 1901 and 1910 Cahill had three progressively larger and more complex versions made, the first weighing seven tons, the last in excess of 200 tons. Portability was managed only by rail and with the use of thirty boxcars. By 1912, public interest had waned, and Cahill's enterprise was bankrupt.[10]

Theremin

Theremin (1924)
Fingerboard Theremin

Another development, which aroused the interest of many composers, occurred in 1919–1920. In Leningrad, Leon Theremin built and demonstrated his Etherophone, which was later renamed the Theremin. This led to the first compositions for electronic instruments, as opposed to noisemakers and re-purposed machines. The Theremin was notable for being the first musical instrument played without touching it.[11] In 1929, Joseph Schillinger composed First Airphonic Suite for Theremin and Orchestra, premièred with the Cleveland Orchestra with Leon Theremin as soloist. The next year Henry Cowell commissioned Theremin to create the first electronic rhythm machine, called the Rhythmicon. Cowell wrote some compositions for it, which he and Schillinger premiered in 1932.

Ondes Martenot

Ondes Martenot (c. 1974, 7th generation model)

The ondes Martenot is played with a keyboard or by moving a ring along a wire, creating "wavering" sounds similar to a theremin.[12] It was invented in 1928 by the French cellist Maurice Martenot, who was inspired by the accidental overlaps of tones between military radio oscillators, and wanted to create an instrument with the expressiveness of the cello.[12][13]

The French composer Olivier Messiaen used the ondes Martenot in pieces such as his 1949 symphony Turangalîla-Symphonie, and his sister-in-law Jeanne Loriod was a celebrated player.[14] It appears in numerous film and television soundtracks, particularly science fiction and horror films.[15] Contemporary users of the ondes Martenot include Tom Waits, Daft Punk and the Radiohead guitarist Jonny Greenwood.[16]

Trautonium

Volks Trautonium (1933, Telefunken Ela T 42)

The Trautonium was invented in 1928. It was based on the subharmonic scale, and the resulting sounds were often used to emulate bell or gong sounds, as in the 1950s Bayreuth productions of Parsifal. In 1942, Richard Strauss used it for the bell- and gong-part in the Dresden première of his Japanese Festival Music. This new class of instruments, microtonal by nature, was only adopted slowly by composers at first, but by the early 1930s there was a burst of new works incorporating these and other electronic instruments.

Hammond organ and Novachord

Hammond Novachord (1939)

In 1929 Laurens Hammond established his company for the manufacture of electronic instruments. He went on to produce the Hammond organ, which was based on the principles of the Telharmonium, along with other developments, including early reverberation units.[17] The Hammond organ is an electromechanical instrument, as it uses both mechanical elements and electronic parts: spinning metal tonewheels produce the different sounds, and a magnetic pickup, similar in design to the pickups in an electric guitar, transmits the pitches of the tonewheels to an amplifier and speaker enclosure. While the Hammond organ was designed to be a lower-cost alternative to a pipe organ for church music, musicians soon discovered that it was an excellent instrument for blues and jazz; indeed, an entire genre developed around the instrument: the organ trio (typically Hammond organ, drums, and a third instrument, either saxophone or guitar).

The first commercially manufactured synthesizer was the Novachord, built by the Hammond Organ Company from 1938 to 1942, which offered 72-note polyphony using 12 oscillators driving monostable-based divide-down circuits, basic envelope control and resonant low-pass filters. The instrument featured 163 vacuum tubes and weighed 500 pounds. Its use of envelope control is notable, since this is perhaps the most important distinction between the modern synthesizer and other electronic instruments.
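The divide-down scheme is easy to illustrate: one master oscillator per semitone of the top octave, with each lower octave derived by halving the frequency. A minimal sketch follows; the tuning figures are illustrative, not the Novachord's actual values.

```python
def divide_down(top_hz: float, octaves: int) -> list[float]:
    """Derive lower octaves from one top-octave oscillator by
    successive frequency halving, as in divide-down organ circuits."""
    return [top_hz / (2 ** k) for k in range(octaves)]

# Twelve top-octave oscillators (one per semitone), each divided down
# through six octaves, cover a 72-note keyboard.
top_octave = [440.0 * 2 ** (n / 12) for n in range(12)]  # 12 semitones above A4
keyboard = [divide_down(f, 6) for f in top_octave]
print(len(keyboard) * len(keyboard[0]))  # 72 notes
```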

Analogue synthesis 1950–1980

Siemens Synthesizer at Siemens Studio For Electronic Music (ca.1959)
The RCA Mark II (ca.1957)

The most commonly used electronic instruments are synthesizers, so-called because they artificially generate sound using a variety of techniques. All early circuit-based synthesis involved the use of analogue circuitry, particularly voltage-controlled amplifiers, oscillators and filters. An important technological development was the invention of the Clavivox synthesizer in 1956 by Raymond Scott with subassembly by Robert Moog. French composer and engineer Edgard Varèse created a variety of compositions using electronic horns, whistles, and tape. Most notably, he wrote Poème électronique for the Philips pavilion at the Brussels World Fair in 1958.

Modular synthesizers


RCA produced experimental devices to synthesize voice and music in the 1950s. One of them, the Mark II Music Synthesizer, was housed at the Columbia-Princeton Electronic Music Center in New York City. Designed by Herbert Belar and Harry Olson at RCA, with contributions from Vladimir Ussachevsky and Peter Mauzey, it was installed at Columbia University in 1957. Consisting of a room-sized array of interconnected sound synthesis components, it was only capable of producing music by programming,[6] using a paper tape sequencer punched with holes to control pitch sources and filters, similar to a mechanical player piano but capable of generating a wide variety of sounds. The vacuum tube system had to be patched to create timbres.

Robert Moog

In the 1960s, synthesizers were still usually confined to studios due to their size. They were usually modular in design, their stand-alone signal sources and processors connected with patch cords or by other means and controlled by a common controlling device. Harald Bode, Don Buchla, Hugh Le Caine, Raymond Scott and Paul Ketoff were among the first to build such instruments in the late 1950s and early 1960s. Buchla later produced a commercial modular synthesizer, the Buchla Music Easel.[18] Robert Moog, who had been a student of Peter Mauzey and one of the RCA Mark II engineers, created a synthesizer that could reasonably be used by musicians, designing the circuits while he was at Columbia-Princeton. The Moog synthesizer was first displayed at the Audio Engineering Society convention in 1964.[19] It required experience to set up sounds, but was smaller and more intuitive than what had come before, less like a machine and more like a musical instrument. Moog established standards for control interfacing, using a logarithmic 1-volt-per-octave standard for pitch control and a separate triggering signal. This standardization allowed synthesizers from different manufacturers to operate simultaneously. Pitch control was usually performed either with an organ-style keyboard or a music sequencer producing a timed series of control voltages. During the late 1960s, hundreds of popular recordings used Moog synthesizers. Other early commercial synthesizer manufacturers included ARP, who also started with modular synthesizers before producing all-in-one instruments, and British firm EMS.
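The 1-volt-per-octave convention maps control voltage to pitch exponentially: each additional volt doubles the frequency. A minimal sketch of the conversion (the 0 V reference frequency here is an arbitrary assumption):

```python
def cv_to_frequency(cv_volts: float, base_hz: float = 261.63) -> float:
    """Convert a 1 V/octave control voltage to a frequency in Hz.

    Each volt above the reference doubles the pitch; each volt below
    halves it. base_hz is the frequency produced at 0 V (assumed here
    to be middle C).
    """
    return base_hz * (2.0 ** cv_volts)

# 1 V up is one octave (523.25 Hz); 1/12 V up is one semitone.
print(cv_to_frequency(1.0), cv_to_frequency(1.0 / 12.0))
```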

Minimoog (1970, R.A.Moog)

Integrated synthesizers


In 1970, Moog designed the Minimoog, a non-modular synthesizer with a built-in keyboard. The analogue circuits were interconnected with switches in a simplified arrangement called "normalization". Though less flexible than a modular design, normalization made the instrument more portable and easier to use. The Minimoog sold 12,000 units,[20] and it further standardized the design of subsequent synthesizers with its integrated keyboard, pitch and modulation wheels, and VCO->VCF->VCA signal flow. It has become celebrated for its "fat" sound, and for its tuning problems. Miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments that soon appeared in live performance and quickly became widely used in popular music and electronic art music.[21]

Yamaha GX-1 (ca.1973)
Sequential Circuits Prophet-5 (1977)

Polyphony


Many early analog synthesizers were monophonic, producing only one tone at a time. Popular monophonic synthesizers include the Moog Minimoog. A few, such as the Moog Sonic Six, ARP Odyssey and EML 101, could produce two different pitches at a time when two keys were pressed. Polyphony (multiple simultaneous tones, which enables chords) was only obtainable with electronic organ designs at first. Popular electronic keyboards combining organ circuits with synthesizer processing included the ARP Omni and Moog's Polymoog and Opus 3.

By the mid-1970s, polyphonic synthesizers began to appear, such as the Yamaha CS-50, CS-60 and CS-80 and the Oberheim Four-Voice. These remained complex, heavy and relatively costly. The recording of settings in digital memory allowed storage and recall of sounds. The first practical polyphonic synth, and the first to use a microprocessor as a controller, was the Sequential Circuits Prophet-5, introduced in late 1977.[22] For the first time, musicians had a practical polyphonic synthesizer that could save all knob settings in computer memory and recall them at the touch of a button. The Prophet-5's design paradigm became a new standard, slowly pushing out more complex and recondite modular designs.

Tape recording

Phonogene (1953), for musique concrète
Mellotron MkVI[23][24][25]

In 1935, another significant development was made in Germany. Allgemeine Elektricitäts Gesellschaft (AEG) demonstrated the first commercially produced magnetic tape recorder, called the Magnetophon. Audio tape, which had the advantage of being fairly light as well as having good audio fidelity, ultimately replaced the bulkier wire recorders.

The term "electronic music" (which first came into use during the 1930s) came to include the tape recorder as an essential element: "electronically produced sounds recorded on tape and arranged by the composer to form a musical composition".[26] It was also indispensable to Musique concrète.

Tape also gave rise to the first analogue sample-playback keyboards: the Chamberlin and its more famous successor, the Mellotron, an electro-mechanical, polyphonic keyboard originally developed and built in Birmingham, England, in the early 1960s.

Sound sequencer

One of the earliest digital sequencers, EMS Synthi Sequencer 256 (1971)

During the 1940s–1960s, Raymond Scott, an American composer of electronic music, invented various kinds of music sequencers for his electronic compositions. Step sequencers played rigid patterns of notes using a grid of (usually) 16 buttons, or steps, each step representing 1/16 of a measure. These patterns were then chained together to form longer compositions. Software sequencers have been in use since the 1950s in the context of computer music, including computer-played music (software sequencer), computer-composed music (music synthesis), and computer sound generation (sound synthesis).
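A step sequencer's core behavior is simple to sketch: a fixed-length pattern is scanned at a rate derived from the tempo, and each step either fires a note or rests. A minimal illustration (the pattern and note values are invented for the example):

```python
import time

def play_pattern(pattern, bpm=120, steps_per_beat=4):
    """Scan a 16-step pattern; each step is one sixteenth note."""
    step_duration = 60.0 / bpm / steps_per_beat
    for step, note in enumerate(pattern):
        if note is not None:
            print(f"step {step:2d}: trigger note {note}")
        time.sleep(step_duration)

# 16 steps = one measure of sixteenth notes; None means a rest.
bassline = [36, None, 36, None, 39, None, 36, None,
            36, None, 36, None, 41, 39, 36, None]
play_pattern(bassline)
```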

Digital era 1980–2000


Digital synthesis

Synclavier I (1977)
Synclavier PSMT (1984)
Yamaha GS-1 (1980)
Yamaha DX7 (1983) and Yamaha VL-1 (1994)

The first digital synthesizers were academic experiments in sound synthesis using digital computers. FM synthesis was developed for this purpose, as a way of generating complex sounds digitally with the smallest number of computational operations per sound sample. In 1983, Yamaha introduced the first stand-alone digital synthesizer, the DX7. It used frequency modulation synthesis (FM synthesis), first developed by John Chowning at Stanford University during the late 1960s.[27] Chowning exclusively licensed his FM synthesis patent to Yamaha in 1975.[28] Yamaha had earlier released its first FM synthesizers, the GS-1 and GS-2, which were costly and heavy. There followed a pair of smaller, preset versions, the CE20 and CE25 Combo Ensembles, targeted primarily at the home organ market and featuring four-octave keyboards.[29] Yamaha's third generation of digital synthesizers was a commercial success; it consisted of the DX7 and DX9 (1983). Both models were compact, reasonably priced, and dependent on custom digital integrated circuits to produce FM tonalities. The DX7 was the first mass-market all-digital synthesizer.[30] It became indispensable to many music artists of the 1980s, and demand soon exceeded supply.[31] The DX7 sold over 200,000 units within three years.[32]

The DX series was not easy to program but offered a detailed, percussive sound that led to the demise of the electro-mechanical Rhodes piano, which was heavier and larger than a DX synth. Following the success of FM synthesis, Yamaha signed a contract with Stanford University in 1989 to develop digital waveguide synthesis, leading to the first commercial physical modeling synthesizer, Yamaha's VL-1, in 1994.[33] The DX7 was affordable enough for amateurs and young bands to buy, unlike the costly synthesizers of previous generations, which were mainly used by top professionals.

Sampling

A Fairlight CMI keyboard (1979)
Kurzweil K250 (1984)

The Fairlight CMI (Computer Musical Instrument), the first polyphonic digital sampler, was the harbinger of sample-based synthesizers.[34] Designed in 1978 by Peter Vogel and Kim Ryrie and based on a dual microprocessor computer designed by Tony Furse in Sydney, Australia, the Fairlight CMI gave musicians the ability to modify volume, attack, decay, and use special effects like vibrato. Sample waveforms could be displayed on-screen and modified using a light pen.[35] The Synclavier from New England Digital was a similar system.[36] Jon Appleton (with Jones and Alonso) invented the Dartmouth Digital Synthesizer, later to become the New England Digital Corp's Synclavier. The Kurzweil K250, first produced in 1983, was also a successful polyphonic digital music synthesizer,[37] noted for its ability to reproduce several instruments synchronously and having a velocity-sensitive keyboard.[38]

Computer music

The ISPW, a successor to the 4X developed by IRCAM, was a DSP platform based on the i860 and NeXT.

An important new development was the advent of computers for the purpose of composing music, as opposed to manipulating or creating sounds. Iannis Xenakis began what is called musique stochastique, or stochastic music, which is a method of composing that employs mathematical probability systems. Different probability algorithms were used to create a piece under a set of parameters. Xenakis used graph paper and a ruler to aid in calculating the velocity trajectories of glissando for his orchestral composition Metastasis (1953–54), but later turned to the use of computers to compose pieces like ST/4 for string quartet and ST/48 for orchestra (both 1962).

The impact of computers continued in 1956. Lejaren Hiller and Leonard Isaacson composed Illiac Suite for string quartet, the first complete work of computer-assisted composition using algorithmic composition.[39]

In 1957, Max Mathews at Bell Labs wrote the MUSIC-N series, the first family of computer programs for generating digital audio waveforms through direct synthesis. Barry Vercoe later wrote MUSIC 11, a next-generation music synthesis program based on MUSIC IV-BF (it later evolved into Csound, which is still widely used).

In the mid-1980s, Miller Puckette at IRCAM developed graphical signal-processing software for the 4X called Max (after Max Mathews), and later ported it to the Macintosh (with Dave Zicarelli extending it for Opcode[40]) for real-time MIDI control, bringing algorithmic composition within reach of most composers with a modest computer programming background.

MIDI

MIDI enables connections between digital musical instruments

In 1980, a group of musicians and music merchants met to standardize an interface by which new instruments could communicate control instructions to other instruments and the prevalent microcomputer. This standard was dubbed MIDI (Musical Instrument Digital Interface). A paper proposing it was authored by Dave Smith of Sequential Circuits and presented to the Audio Engineering Society in 1981. Then, in August 1983, the MIDI Specification 1.0 was finalized.

The advent of MIDI technology allows a single keystroke, control wheel motion, pedal movement, or command from a microcomputer to activate every device in the studio remotely and in synchrony, with each device responding according to conditions predetermined by the composer.
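Each such MIDI event is a compact message of one status byte plus data bytes. A minimal sketch of how a keystroke becomes bytes on the wire (channel and note numbers are arbitrary examples):

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a MIDI Note On message: status byte 0x90 | channel,
    followed by note number (0-127) and velocity (0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int) -> bytes:
    """Build a MIDI Note Off message (status byte 0x80 | channel)."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

# Middle C pressed on channel 1 (channel index 0), then released.
print(note_on(0, 60, 100).hex(), note_off(0, 60).hex())
```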

MIDI instruments and software made powerful control of sophisticated instruments easily affordable by many studios and individuals. Acoustic sounds became reintegrated into studios via sampling and sampled-ROM-based instruments.

Modern electronic musical instruments

Wind synthesizer
SynthAxe

The increasing power and decreasing cost of sound-generating electronics (and especially of the personal computer), combined with the standardization of the MIDI and Open Sound Control musical performance description languages, has facilitated the separation of musical instruments into music controllers and music synthesizers.

By far the most common musical controller is the musical keyboard. Other controllers include the radiodrum, Akai's EWI and Yamaha's WX wind controllers, the guitar-like SynthAxe, the BodySynth,[41] the Buchla Thunder, the Continuum Fingerboard, the Roland Octapad, various isomorphic keyboards including the Thummer, the Kaossilator Pro, and kits like I-CubeX.

Reactable

Reactable

The Reactable is a round translucent table with a backlit interactive display. By placing and manipulating blocks called tangibles on the table surface, while interacting with the visual display via finger gestures, a virtual modular synthesizer is operated, creating music or sound effects.

Percussa AudioCubes

Audiocubes

AudioCubes are autonomous wireless cubes powered by an internal computer system and rechargeable battery. They have internal RGB lighting, and are capable of detecting each other's location, orientation and distance. The cubes can also detect distances to the user's hands and fingers. Through interaction with the cubes, a variety of music and sound software can be operated. AudioCubes have applications in sound design, music production, DJing and live performance.

Kaossilator

Korg Kaossilator

The Kaossilator and Kaossilator Pro are compact instruments where the position of a finger on the touch pad controls two note characteristics; usually the pitch is changed with a left-right motion and the tonal property, filter or other parameter changes with an up-down motion. The touch pad can be set to different musical scales and keys. The instrument can record a repeating loop of adjustable length, set to any tempo, and new loops of sound can be layered on top of existing ones. This lends itself to electronic dance music but is more limited for controlled sequences of notes, as the pad on a regular Kaossilator is featureless.
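The pad-to-parameter mapping can be sketched as follows: the horizontal position is quantized to a note of the selected scale, while the vertical position sets a filter parameter. The scale choice and parameter ranges below are illustrative assumptions, not Korg's actual mapping.

```python
MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets within one octave

def pad_to_note(x: float, root: int = 57, octaves: int = 2) -> int:
    """Quantize horizontal pad position x in [0, 1] to a MIDI note
    drawn from the selected scale."""
    degrees = [root + 12 * o + s for o in range(octaves) for s in MINOR_PENTATONIC]
    index = min(int(x * len(degrees)), len(degrees) - 1)
    return degrees[index]

def pad_to_cutoff(y: float, lo: float = 200.0, hi: float = 8000.0) -> float:
    """Map vertical pad position y in [0, 1] to a filter cutoff in Hz."""
    return lo * (hi / lo) ** y  # exponential sweep feels even to the ear

print(pad_to_note(0.4), round(pad_to_cutoff(0.7), 1))
```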

Eigenharp


The Eigenharp is a large instrument resembling a bassoon, which can be interacted with through big buttons, a drum sequencer and a mouthpiece. The sound processing is done on a separate computer.

AlphaSphere


The AlphaSphere is a spherical instrument that consists of 48 tactile pads that respond to pressure as well as touch. Custom software allows the pads to be programmed individually or in groups in terms of function, note, and pressure parameter, among many other settings. The primary concept of the AlphaSphere is to increase the level of expression available to electronic musicians, by allowing for the playing style of a musical instrument.

Chip music


Chiptune, chipmusic, or chip music is music written in sound formats where many of the sound textures are synthesized or sequenced in real time by a computer or video game console sound chip, sometimes including sample-based synthesis and low-bit sample playback. Many chip music devices featured synthesizers in tandem with low-rate sample playback.

DIY culture


During the late 1970s and early 1980s, do-it-yourself designs were published in hobby electronics magazines (such as the Formant modular synth, a DIY clone of the Moog system, published by Elektor), and kits were supplied by companies such as Paia in the US and Maplin Electronics in the UK.

Circuit bending

Probing for "good bends" using a jeweler's screwdriver and alligator clips

In 1966, Reed Ghazala discovered and began to teach "circuit bending": the application of the creative short circuit, a process of chance short-circuiting that creates experimental electronic instruments, exploring sonic elements mainly of timbre, with less regard to pitch or rhythm, and influenced by John Cage's aleatoric music concept.[42]

Much of this manipulation of circuits directly, especially to the point of destruction, was pioneered by Louis and Bebe Barron in the early 1950s, such as their work with John Cage on the Williams Mix and especially in the soundtrack to Forbidden Planet.

Modern circuit bending is the creative customization of the circuits within electronic devices such as low voltage, battery-powered guitar effects, children's toys and small digital synthesizers to create new musical or visual instruments and sound generators. Emphasizing spontaneity and randomness, the techniques of circuit bending have been commonly associated with noise music, though many more conventional contemporary musicians and musical groups have been known to experiment with bent instruments. Circuit bending usually involves dismantling the machine and adding components such as switches and potentiometers that alter the circuit. With the revived interest in analogue synthesizers, circuit bending became a cheap solution for many experimental musicians to create their own individual analogue sound generators. Nowadays, many schematics can be found to build noise generators such as the Atari Punk Console or the Dub Siren as well as simple modifications for children's toys such as the Speak & Spell that are often modified by circuit benders.

Modular synthesizers


The modular synthesizer is a type of synthesizer consisting of separate interchangeable modules. These are also available as kits for hobbyist DIY constructors. Many hobbyist designers also sell bare PCBs and front panels to other hobbyists.

from Grokipedia
An electronic musical instrument is a device that generates sounds through electronic means, typically using oscillators, circuits, and electronic components such as vacuum tubes or transistors to produce and manipulate audio signals, distinct from electro-mechanical or acoustic instruments that rely on physical vibration or amplification of acoustic sound. These instruments emerged in the late 19th century and have since revolutionized music creation, performance, and production across genres including classical, rock, and pop.

The history of electronic musical instruments began with early innovations like Thaddeus Cahill's telharmonium, patented in 1897 and operational by 1906, which used electrical generators to produce tones distributed over telephone lines, marking the first large-scale electrically generated sound instrument. Key developments in the early 20th century included the invention of the theremin in 1920 by Léon Theremin, the first instrument played without physical contact, using hand gestures near antennas to control pitch and volume via electronic oscillators. Post-World War II advancements featured tape recorders for sound manipulation in musique concrète and the rise of analog synthesizers, with Robert Moog's synthesizer introduced commercially in 1967, enabling musicians to generate and shape complex waveforms through voltage-controlled components. By the 1970s and 1980s, electronic instruments evolved with portable models like the Minimoog (1970) and the introduction of digital technologies, including the Musical Instrument Digital Interface (MIDI) standard in 1983, which allowed seamless communication between synthesizers, computers, and drum machines such as the Roland TR-808 (1980).

Today, electronic musical instruments encompass a wide array, from hardware synthesizers and samplers to software-based virtual instruments and controllers, influencing virtually all modern music production through affordable digital audio workstations (DAWs) and enabling real-time performance in live settings.

Definition and Classification

Core Principles

Electronic musical instruments generate sound by producing and manipulating electrical signals that are ultimately converted into audible vibrations through transducers such as speakers or headphones, in contrast to acoustic instruments, which rely on mechanical vibrations of physical materials like strings, membranes, or air columns to displace air molecules and create sound waves. This electronic approach allows for precise control over parameters like pitch and timbre without the physical constraints of traditional instruments, enabling the synthesis of sounds that may not occur naturally.

At the core of electronic sound generation are key components that process electrical signals to mimic or create musical tones. Waveforms form the basis of these signals, with common types including the sine wave, defined by the equation $y(t) = A \sin(2\pi f t + \phi)$, where $A$ is the amplitude, $f$ is the frequency, $t$ is time, and $\phi$ is the phase; square waves, which feature abrupt transitions and odd harmonics; and sawtooth waves, characterized by a linear ramp with both even and odd harmonics. Voltage-controlled oscillators (VCOs) produce these waveforms at frequencies typically ranging from 20 Hz to 20 kHz, with their output frequency modulated by an input voltage according to standards like 1 V/octave scaling. Envelope generators further shape the sound's dynamics using the ADSR model (Attack: initial rise; Decay: drop from peak to sustain; Sustain: held level; Release: fade after note end) to control amplitude or other parameters over time, often with time ranges spanning milliseconds to seconds. Filters, such as low-pass types, and amplifiers complete the manipulation, attenuating specific frequencies to shape timbre and boosting signal strength while preventing distortion.

The basic signal flow in electronic instruments follows a modular path: an oscillator generates the initial waveform, which passes through filters for spectral shaping, then to an amplifier modulated by an envelope generator for amplitude control, before reaching the output transducer that converts the electrical signal into acoustic sound. This chain enables real-time synthesis and processing, with control voltages or digital signals directing the flow.

The roots of these principles trace to 19th-century experiments integrating electricity with sound, such as Samuel Thomas Soemmerring's 1809 demonstration of triggering tuned bells via telegraph wires, Elisha Gray's 1874 musical telegraph using vibrating reeds to transmit tones over wires, and Thaddeus Cahill's late-1890s Telharmonium, which employed dynamos and tone wheels to generate electromagnetic tones broadcast through telephone lines. These innovations built on electromagnetic induction principles, as explored in Hermann von Helmholtz's 1863 work on tone sensation, laying the groundwork for modern electronic music technology.
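The oscillator-filter-amplifier chain described above can be rendered in a few lines of code. The sketch below produces one note: a sawtooth oscillator, a one-pole low-pass filter, and an ADSR-shaped amplifier. All parameter values are illustrative assumptions.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def sawtooth(freq, dur):
    """Oscillator: naive sawtooth, a harmonically rich input for filtering."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0

def lowpass(signal, cutoff_hz):
    """Filter: one-pole low-pass attenuating content above the cutoff."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / SR)
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

def adsr(n, attack=0.01, decay=0.1, sustain=0.6, release=0.2):
    """Envelope: attack/decay/release times in seconds, sustain as a level."""
    a, d, r = int(SR * attack), int(SR * decay), int(SR * release)
    s = max(n - a - d - r, 0)
    return np.concatenate([
        np.linspace(0, 1, a),          # attack: initial rise
        np.linspace(1, sustain, d),    # decay: fall to sustain level
        np.full(s, sustain),           # sustain: held level
        np.linspace(sustain, 0, r),    # release: fade after note end
    ])[:n]

note = sawtooth(220.0, 1.0)    # oscillator
note = lowpass(note, 1200.0)   # filter (spectral shaping)
note *= adsr(len(note))        # amplifier driven by the envelope generator
```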

Categories of Electronic Instruments

Electronic musical instruments are organized within taxonomic frameworks that extend classical systems to account for their electronic and digital characteristics. The foundational Hornbostel-Sachs classification, established in 1914, categorized instruments based on sound production mechanisms into idiophones, membranophones, chordophones, and aerophones; Curt Sachs expanded this in 1940 by adding electrophones as a fifth category for devices generating or amplifying sound electrically. Subsequent revisions, such as those by the Musical Instrument Museums Online (MIMO) consortium in 2011, incorporate digital elements and non-Western instruments, emphasizing modular schemes to handle the dynamic nature of modern electrophones. These evolutions reflect a shift from rigid hierarchies to more flexible, heterarchical approaches that consider technological integration and performative contexts.

Primary categories delineate instruments by core functions: synthesizers generate original sounds through electronic oscillators and signal processing; samplers reproduce and manipulate pre-recorded audio samples; controllers act as input interfaces, transmitting performance data via protocols like MIDI without producing audio; and hybrids merge these elements, such as devices combining synthesis with sample playback.

Sub-classifications further refine synthesizers by polyphony (monophonic models play a single note at a time, ideal for leads or basses, while polyphonic ones support multiple simultaneous notes for chords and textures) and by synthesis paradigms, including additive methods that construct waveforms by summing sine waves to build harmonics, and subtractive approaches that filter rich oscillator outputs like sawtooth waves to sculpt tones.

Instruments are also grouped by role in musical practice: performance-oriented devices, such as keyboard synthesizers and wind controllers, facilitate real-time expression in live settings; studio tools, including drum machines and sequencers, enable programmed rhythms and arrangements for recording and production; and experimental instruments, like touchless controllers reminiscent of the theremin, explore novel gesture-based interactions beyond traditional interfaces. Category overlaps are common, as evidenced by keyboard synthesizers that integrate sampling engines with synthesis modules, allowing seamless transitions between recorded and generated sounds in a single device.

Historical Evolution

Pioneering Instruments (1890s–1940s)

The development of electronic musical instruments in the late 19th and early 20th centuries stemmed from experiments with electrical generators, particularly dynamos, which produced audible tones when operated at specific speeds. In the 1890s, inventors like Thaddeus Cahill explored these dynamo-generated sounds, laying the groundwork for electromechanical sound production by recognizing that rotating electromagnetic components could mimic musical pitches.

The Telharmonium, patented by American inventor Thaddeus Cahill in 1897, represented the first major electromechanical musical instrument. This massive device, weighing up to 200 tons in its largest iteration, used tonewheels (rotating disks driven by dynamos) to generate sinusoidal waveforms for various pitches, combining these tones additively to produce organ-like sounds. Cahill's design allowed for public performances transmitted over telephone lines to restaurants and halls in New York City starting in 1906, marking an early experiment in musical broadcasting despite technical challenges like electrical interference.

In 1920, Russian physicist Léon Theremin invented the theremin, one of the earliest fully electronic instruments relying on vacuum tubes rather than mechanical generators. The device employed two high-frequency oscillators operating on the principle of heterodyning, where the beat frequency between them produced audible pitches; performers controlled pitch and volume by moving their hands near antennas, altering the circuits' capacitance without physical contact. Theremin demonstrated the instrument across Europe and the United States, influencing composers and sparking interest in gesture-based control.

The ondes Martenot, patented by French radio operator Maurice Martenot in 1928, introduced a novel hybrid control system combining a traditional keyboard with a ring attached to a wire for continuous pitch gliding. This vacuum-tube instrument generated tones via a heterodyne circuit similar to the theremin's, but added a pull-ring mechanism for expressive glissandi and a set of intensity keys for dynamic variation, enabling it to emulate string and wind instruments. Martenot's design quickly gained traction in France, with early adopters including his sister Ginette Martenot, who performed it in orchestral settings.

German engineer Friedrich Trautwein developed the Trautonium in 1930 at the Berlin Musikhochschule, pioneering subharmonic synthesis, in which oscillators produced frequencies below a fundamental tone to create rich, non-harmonic timbres. The instrument featured a resistive wire stretched over a metal strip as a ribbon controller, pressed by the finger to select pitch, alongside a foot pedal for volume and optional "aftertouch" strings for subharmonic triggering. Oskar Sala refined later versions, such as the 1952 Mixtur-Trautonium, which added filters, but the original emphasized monophonic expression suited to experimental composition.

The Hammond organ, introduced by American inventor Laurens Hammond in 1935, advanced tonewheel technology into a compact, polyphonic instrument suitable for churches and theaters. It employed 91 small rotating disks driven by a synchronous motor to generate sine waves, which were shaped through drawbars to mimic pipe organ stops, with vacuum-tube amplification for output. Hammond's design emphasized reliability and affordability, producing over 1,000 units annually by the late 1930s.

Building on Hammond's innovations, the Novachord debuted in 1939 as the first commercial polyphonic synthesizer, utilizing 163 vacuum tubes in a divide-down oscillator chain to enable all 72 notes of its keyboard to sound simultaneously. Designed by Hammond engineers John Hanert and C. N. Williams, it featured nine selectable vacuum-tube circuits per key for tone shaping, producing ethereal, string-like timbres through built-in speakers. Only about 1,000 units were made before production halted in 1942 due to wartime restrictions.

Key demonstrations at the 1939 New York World's Fair highlighted these instruments' potential, with the Novachord featured in Ferde Grofé's "New World Ensemble" performances, showcasing polyphonic electronic sound to large audiences and underscoring the shift toward integrated amplification in music.

Analog Synthesis Era (1950s–1970s)

The Analog Synthesis Era marked a pivotal shift in electronic music technology, driven by the development of voltage-controlled modular synthesizers that allowed musicians and composers to generate and manipulate sounds through interconnected electronic modules. Building on earlier vacuum-tube experiments, this period saw the transition to more flexible, real-time synthesis systems using transistors and integrated circuits. Key innovations included voltage-controlled oscillators (VCOs), amplifiers (VCAs), and filters (VCFs), enabling dynamic control over pitch, amplitude, and timbre via patch cables that routed signals between modules. These systems emphasized subtractive synthesis, where complex waveforms were shaped into musical tones, fostering experimental compositions in studios and performances alike.

Pioneering modular synthesizers emerged in the late 1950s and 1960s, with the RCA Mark II Sound Synthesizer (1957) representing an early milestone. Designed by engineers Harry F. Olson and Herbert Belar at RCA Laboratories, it featured a room-sized array of vacuum-tube oscillators, filters, and envelope generators controlled via punched paper tape for precise sequencing of frequencies, volumes, and timbres across a 10-octave range. Installed at Columbia University's Electronic Music Center, the RCA Mark II influenced subsequent designs by demonstrating modular signal routing, though its reliance on pre-programmed inputs limited real-time playability.

The Moog Modular system, introduced in 1964 by inventor Robert Moog, revolutionized accessibility with its voltage-controlled architecture. Comprising customizable modules connected by patch cables, it included VCOs for generating sawtooth, square, and triangle waves; VCAs for amplitude control; and VCFs, particularly the resonant ladder filter, for timbral shaping. This setup allowed performers to create evolving sounds interactively, departing from rigid tape-based methods and inspiring a generation of electronic musicians. Similarly, Don Buchla's 100 Series, developed around 1963 and refined by 1966, offered a modular alternative focused on experimental sound design. Commissioned for the San Francisco Tape Music Center, it eschewed traditional keyboards in favor of touch-sensitive plates and ribbon controllers, with modules for complex waveshaping and signal processing. Buchla's emphasis on non-traditional interfaces complemented Moog's keyboard-centric approach, broadening the palette for avant-garde composition.

By the early 1970s, integrated synthesizers miniaturized these concepts into portable, performance-oriented instruments. The Minimoog, released in 1970 by R.A. Moog Co., was the first commercially successful portable synthesizer, featuring three VCOs, a multimode VCF, and a VCA in a fixed architecture with a 44-note keyboard. Its monophonic design and immediate response made it a staple for rock musicians, producing iconic leads and basses through subtractive synthesis. The ARP Odyssey, launched in 1972 by ARP Instruments, followed as a compact competitor, incorporating dual VCOs, a versatile filter (initially 12 dB/octave), and a pressure-sensitive keyboard for expressive control.

Advancements in polyphony addressed the monophonic limitations of early models, enabling multiple simultaneous notes. The Sequential Circuits Prophet-5, introduced in 1978, was the first fully programmable polyphonic synthesizer, with five voices each featuring two VCOs, a VCF, and a VCA, plus microprocessor-based patch memory for instant recall. This voice allocation system, dividing resources among notes, overcame prior constraints, allowing chordal playing and layered textures that expanded analog synthesis into mainstream production.

Key figures like Wendy Carlos amplified the era's cultural impact through innovative recordings. Her 1968 album Switched-On Bach, performed entirely on a custom Moog Modular, reinterpreted Bach's works with meticulous synthesizer phrasing, achieving over one million sales and topping classical charts from 1969 to 1972. This Grammy-winning release demonstrated the instrument's expressive potential, shifting public perception from novelty to serious musical tool and boosting demand for analog systems.

Integration with analog tape recording drew from musique concrète techniques, where physical tape manipulation created rhythmic and textural effects. Pioneered by Pierre Schaeffer at the Groupe de Recherches Musicales (GRM) in the early 1950s, hardware like multi-track tape recorders and loop mechanisms (such as rotating disks with magnetic heads picking up looped segments) enabled speed variations, splicing, and feedback for composing with recorded sounds. By the 1960s, synthesizers interfaced with these setups, allowing electronic tones to be layered and edited on tape for concrète-inspired works.

Early analog step sequencers further enhanced pattern-based composition, automating repetitive sequences. Moog introduced sequencer modules in the mid-1960s, featuring rows of knobs and switches to step through voltage patterns for controlling VCO pitch and other parameters, often synced to clocks for hypnotic loops. These tools, integral to modular systems, facilitated the sequenced grooves of bands like Tangerine Dream, bridging studio experimentation with live performance.

Digital Transition (1980s–1990s)

The digital transition in electronic musical instruments during the 1980s and 1990s marked a pivotal shift from analog circuitry to computational sound generation, enabling greater precision, flexibility, and programmability through microprocessors and digital signal processing. This era began with innovations in sampling, where devices captured and stored acoustic waveforms in RAM for playback and manipulation, revolutionizing music production by allowing musicians to replicate real-world instruments or create novel timbres.

The Fairlight CMI, introduced in 1979, pioneered this approach as the first commercial digital sampler and synthesizer, featuring 8-bit sampling at 24 kHz, graphic waveform editing via light pen, and up to 16 voices of sampled sound stored on floppy disks. Following closely, the E-mu Emulator in 1981 made sampling more accessible with 8-bit resolution at 27.4 kHz, 8-voice polyphony, and storage for up to 2 seconds of mono samples per voice, enabling pitch-shifting, looping, and envelope shaping without analog instability. These tools laid the groundwork for genres like hip-hop, where producers manipulated breakbeats and vocal snippets, contributing to the rise of sampling culture in the 1980s.

Digital synthesis emerged as a complementary advancement, with frequency modulation (FM) techniques offering efficient algorithmic sound creation via phase modulation. Yamaha's DX7, released in 1983, popularized FM synthesis through its 6-operator architecture, generating complex harmonics from simple sine waves using phase modulation, mathematically expressed as $y(n) = A \sin(\omega n + I \sin(\omega_m n))$, where $A$ is the amplitude, $\omega$ the carrier frequency, $I$ the modulation index, and $\omega_m$ the modulator frequency; this allowed 32-voice polyphony and 128 preset patches stored in ROM.

The era's impact was amplified by the introduction of MIDI in 1983, a standardized serial interface developed by Sequential Circuits, Roland, and others, specifying messages like Note On (status 0x90, note number 0-127, velocity 1-127), Note Off (status 0x80, note number, release velocity 0-127), and continuous controllers for real-time parameter control, fostering interoperability among synthesizers, sequencers, and computers. Demonstrated publicly at the 1983 NAMM show with a Sequential Circuits Prophet-600 controlling a Roland Jupiter-6, MIDI enabled synchronized performances and multi-timbral setups, transforming studio workflows.

Key instruments bridged synthesis and sampling, such as the Roland TR-808 (1980), which used a digital sequencer for 32-step patterns and analog synthesis for percussion voices like its signature deep bass drum, influencing early hip-hop tracks through programmable rhythms exported via sync ports. The Akai MPC-60 (1988), dubbed the "MIDI Production Center," integrated 12-bit sampling at 40 kHz (up to 13 seconds standard, expandable), a grid of 16 velocity-sensitive pads, and a 16-track sequencer, allowing real-time chopping, time-stretching, and sequencing for beat-making.

Computer integration accelerated this shift, with software like Csound (1986), developed by Barry Vercoe at MIT, providing a unit-generator-based language for composition and synthesis on platforms including the Atari ST, which featured built-in MIDI ports for direct hardware control. These developments fueled the explosion of techno in Detroit and hip-hop in New York during the late 1980s and 1990s, where digital tools enabled repetitive, machine-like grooves and sample-based innovation.
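The two-operator form of the FM equation above is straightforward to render digitally. A minimal sketch, where the carrier frequency, frequency ratio, and modulation index are arbitrary example values:

```python
import numpy as np

SR = 44100

def fm_tone(carrier_hz, ratio, index, dur, amp=0.8):
    """Two-operator FM: y(n) = A*sin(w*n + I*sin(wm*n)).

    ratio sets the modulator frequency as a multiple of the carrier;
    index (I) controls sideband richness; higher values sound brighter.
    """
    n = np.arange(int(SR * dur))
    w = 2 * np.pi * carrier_hz / SR
    wm = 2 * np.pi * carrier_hz * ratio / SR
    return amp * np.sin(w * n + index * np.sin(wm * n))

# A bell-like tone: inharmonic frequency ratio, moderate index.
bell = fm_tone(440.0, ratio=1.4, index=5.0, dur=2.0)
```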

Contemporary Innovations (2000s–Present)

The 2000s marked a shift toward software-based electronic instruments, with Virtual Studio Technology (VST) plugins enabling complex synthesis within digital audio workstations (DAWs). Native Instruments' Massive, released in 2006, exemplified this trend as a wavetable synthesizer renowned for its versatile capabilities, particularly in generating aggressive bass and lead sounds for electronic genres. Ableton Live, launched in 2001, further transformed production workflows by introducing a non-linear session view that facilitated real-time looping and arrangement, making it essential for live electronic performances and studio composition. These tools democratized access to professional-grade synthesis, allowing musicians to emulate and expand upon analog hardware without physical constraints.

Hardware innovations in the 2000s and 2010s emphasized intuitive, gestural interfaces to enhance expressivity. The Reactable, developed in 2007 by a team at Universitat Pompeu Fabra, introduced a collaborative tabletop system where users manipulate physical blocks to control synthesis and effects in real time, fostering visual and tactile creation. Evolutions of Korg's Kaoss Pad series, such as the KP3 released in 2006, built on the original 1999 model's XY touchpad by adding sampling and looping features, enabling dynamic effect manipulation for DJs and performers. The Eigenharp Alpha, launched in 2009, offered breath control and keywave strips for nuanced polyphonic expression, bridging wind-instrument techniques with digital synthesis. Similarly, the AlphaSphere, prototyped around 2012 and shipped in 2013, featured a spherical form with pressure-sensitive pads for multidimensional control, promoting immersive tactile interaction. Percussa's AudioCubes, available from 2009, utilized wireless, light-emitting cubes to enable spatial audio manipulation and gestural performance in three dimensions.

Advancements in artificial intelligence (AI) have integrated smart features into electronic instruments, generating adaptive sounds and assisting composition. Google's Magenta project, initiated in 2016, employs machine learning models to create music in real time, influencing AI-driven synthesizers that suggest melodies or harmonize inputs based on user patterns. Endel, launched in 2018, uses AI algorithms to produce personalized ambient soundscapes that respond to biometric data such as heart rate, functioning as an adaptive sound generator for wellness and creative applications. In the 2020s, haptic feedback controllers have emerged to provide tactile responses, simulating instrument feel through vibrations; examples include vibrotactile systems in wearables that synchronize pulses with audio cues for enhanced performer immersion.

Recent trends from 2020 to 2025 highlight increased connectivity and modularity, driven by portable, wireless designs. Bluetooth-enabled controllers like Novation's Launchkey series received significant updates in 2023-2024, incorporating deeper DAW integration and scale modes for seamless mobile production. The modular synthesizer format experienced a surge in popularity during the early 2020s, with manufacturers introducing compact, customizable modules that appeal to experimental musicians seeking analog-digital hybrids. Alongside virtual and augmented reality (VR/AR) instruments, haptic wearables such as Soundbrenner's, updated in 2022, deliver rhythm via vibrations, aiding silent practice and ensemble synchronization without auditory disruption. The global market for electronic musical instruments has grown steadily, projected to reach over $1 billion by 2035, fueled by advancements in portability, AI integration, and connectivity that lower barriers for creators worldwide.

Synthesis and Sound Generation Techniques

Analog Methods

Analog methods in electronic musical instruments rely on continuous electrical signals processed through hardware circuits to generate and shape sounds, emphasizing techniques like subtractive and additive synthesis that dominated early designs.

Subtractive synthesis begins with oscillators producing waveforms rich in harmonics, such as the sawtooth wave, which contains a fundamental and all its integer multiples, creating a bright, buzzy timbre. These harmonics are then shaped by filters, typically low-pass types, that attenuate higher frequencies to subtract unwanted overtones and sculpt the desired sound character. In analog implementations, voltage-controlled filters (VCFs) adjust the cutoff frequency to control which harmonics pass through, often with resonance to boost frequencies near the cutoff for added emphasis.

Additive synthesis constructs complex tones by summing multiple sine waves, each representing a partial, according to the Fourier series, which decomposes any periodic waveform into a fundamental and its harmonics. In analog systems, this requires banks of oscillators tuned to harmonic frequencies, with their amplitudes modulated via voltage-controlled amplifiers (VCAs) to build timbres from simple sine components. For instance, a sawtooth-like wave can be approximated by adding sine waves with amplitudes inversely proportional to their harmonic number (1/n).

Other analog techniques include ring modulation and amplitude modulation, which multiply two signals to produce sum and difference frequencies, yielding metallic or bell-like sounds without the original carrier tone. Ring modulation, implemented via diode rings or analog multipliers in hardware, creates inharmonic spectra ideal for dissonant effects, while amplitude modulation varies the carrier's volume with the modulator for subtler timbral shifts.

Hardware design in analog filters, such as the iconic Moog ladder filter, employs capacitors and resistors in a multi-stage RC network to define the frequency response, with the cutoff frequency approximated by $f_c = \frac{1}{2\pi RC}$, where $R$ is resistance and $C$ is capacitance. In the Moog design, bias current adjusts the effective resistance across transistor stages, allowing voltage control of the cutoff while maintaining a 24 dB/octave roll-off.

Analog methods offer a characteristic warmth derived from circuit instabilities, including component drift, which introduce subtle variations and low-level noise that enhance perceived organic quality. However, these same instabilities lead to limitations, such as pitch drift from temperature-sensitive components and tuning challenges requiring frequent calibration.
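Two of these techniques are compact enough to state directly in code: the 1/n additive build-up of a sawtooth, and ring modulation as a plain sample-by-sample product. Partial counts and frequencies below are illustrative choices.

```python
import numpy as np

SR = 44100

def additive_saw(freq, dur, partials=30):
    """Additive synthesis: sum sine partials with 1/n amplitudes,
    approximating a sawtooth per its Fourier series."""
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros_like(t)
    for n in range(1, partials + 1):
        out += np.sin(2 * np.pi * n * freq * t) / n
    return out * (2 / np.pi)

def ring_modulate(carrier, modulator):
    """Ring modulation: multiply two signals, yielding sum and
    difference frequencies while suppressing the originals."""
    return carrier * modulator

t = np.arange(int(SR * 1.0)) / SR
metallic = ring_modulate(np.sin(2 * np.pi * 440 * t),
                         np.sin(2 * np.pi * 171 * t))
```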

Digital Synthesis

Digital synthesis encompasses computational algorithms that generate audio signals through mathematical processing of discrete data, enabling the creation of complex timbres without relying on analog circuitry. These methods leverage digital signal processors (DSPs) to simulate or invent sounds, offering advantages in stability, programmability, and integration with software environments. Pioneered in the 1970s with early computer music systems, digital synthesis has evolved to support real-time performance in hardware synthesizers and virtual instruments.

Wavetable synthesis operates by storing a series of single-cycle waveforms in a lookup table and modulating the read position to produce timbral evolution. Originating from the work of Wolfgang Palm at PPG in the late 1970s, this technique was commercialized in the PPG Wave 2.2 synthesizer in 1982, where scanning through roughly 2,000 waveforms allowed for dynamic, morphing sounds. Interpolation between adjacent wavetable frames ensures smooth transitions, preventing audible clicks during modulation. Modern implementations, such as in software like Massive, extend this by allowing user-defined wavetables and spectral morphing for richer harmonics.

Frequency modulation (FM) synthesis produces complex tones by modulating the frequency of a carrier wave with one or more modulator waves, generating sidebands whose amplitudes follow Bessel functions and yield rich, evolving harmonics. Developed by John Chowning in the early 1970s at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA), it was first realized digitally and licensed to Yamaha, which introduced the DX7 in 1983 featuring six operators configurable in various algorithms for routing modulation paths. This enabled the creation of metallic, bell-like, and percussive timbres that became iconic in 1980s pop music.

Granular synthesis decomposes sounds into microsegments called grains, typically 10-100 milliseconds long, then recombines them with parameters like pitch transposition, time stretching, and spatial positioning to form new textures. The concept was theorized by Iannis Xenakis in his 1971 book Formalized Music, envisioning sound as "clouds" of particles, and was first realized digitally in the 1970s, notably by Curtis Roads, using early computer implementations. By randomizing grain density and overlap, it creates ethereal, non-periodic timbres well suited to ambient and experimental electronic music.

Physical modeling synthesis replicates the acoustics of instruments by solving difference or differential equations that describe wave propagation in virtual materials. A foundational example is the Karplus-Strong algorithm, developed in 1983, which models a plucked string as a delay line of length $N$ (proportional to the pitch period) excited by white noise, with a low-pass filter in the feedback loop to simulate energy decay and damping. The core recursion is

$y(n) = \alpha \cdot \frac{y(n-N) + y(n-N-1)}{2} + (1 - \alpha) \cdot y(n-1)$

where $\alpha$ controls the decay rate, producing realistic plucked and percussive tones with minimal computational cost. This method has influenced later waveguide-based synthesizers, such as Yamaha's VL1, and modern plugins, extending to the modeling of winds and reeds via waveguide networks.

Vector synthesis blends multiple sound sources—often four oscillators or wavetables—using a two-dimensional controller, typically a joystick, to dynamically mix their amplitudes and create hybrid timbres. Introduced by Sequential Circuits in the Prophet VS synthesizer in 1986, it combines elements of additive and wavetable methods for intuitive timbral control.
In contemporary instruments like the Korg Wavestate (released 2020), vector synthesis integrates with advanced wavetable variants through Wave Sequencing 2.0, where lanes of waveforms are sequenced and vector-mixed in real time, enabling evolving patches with up to 64 steps per sequence for intricate, performative textures.

Post-2000, field-programmable gate arrays (FPGAs) have revolutionized real-time DSP in electronic instruments by providing parallel processing for computationally intensive algorithms like physical modeling. FPGAs allow low-latency implementation of finite-difference schemes for simulating complex instrument geometries, achieving real-time performance unattainable on general-purpose CPUs for large-scale models. For instance, FPGA implementations have demonstrated synthesis of full guitar bodies with sub-millisecond latency, enhancing live interactivity in hybrid analog-digital setups.
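The Karplus-Strong recursion shown above fits in a few lines of code. Here is a minimal Python sketch (assuming NumPy; a naive loop rather than a production implementation) that synthesizes a plucked-string tone:

```python
import numpy as np

def karplus_strong(freq, duration, sample_rate=44100, alpha=0.996):
    """Plucked-string tone via the Karplus-Strong recursion.

    A delay line of length N (one pitch period) is seeded with white
    noise; each output averages the delayed sample with the previous
    delayed sample (the low-pass term), blended with the last output.
    """
    N = int(sample_rate / freq)            # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, N)      # noise burst acts as the "pluck"
    out = np.empty(int(duration * sample_rate))
    prev_out = 0.0                         # y(n-1)
    prev_delayed = 0.0                     # y(n-N-1)
    for n in range(len(out)):
        delayed = buf[n % N]                           # y(n-N)
        avg = 0.5 * (delayed + prev_delayed)           # low-pass of delay line
        sample = alpha * avg + (1 - alpha) * prev_out  # the core recursion
        buf[n % N] = sample                            # feed y(n) back in
        out[n] = sample
        prev_out, prev_delayed = sample, delayed
    return out

pluck = karplus_strong(220.0, 1.5)  # an A3 pluck lasting 1.5 seconds
```

Lowering `alpha` slightly shortens the decay, mimicking heavier string damping.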

Sampling and Synthesis Hybrids

Hybrid approaches to sound generation in electronic instruments emerged from early tape-based manipulations, blending recorded audio with creative alterations akin to synthesis. In 1948, guitarist Les Paul pioneered sound-on-sound recording using the Ampex Model 200A reel-to-reel tape deck, allowing multiple layers of recorded sounds to be overdubbed and manipulated on tape. By the 1950s, composers in the musique concrète tradition, such as Pierre Schaeffer and Pierre Henry, advanced these techniques through physical tape splicing and looping in studio environments, creating rhythmic patterns and dense sonic textures from everyday recordings without traditional instruments. These methods effectively hybridized sampling—capturing real-world audio—with synthetic reconfiguration, laying foundational principles for later digital implementations.

The transition to digital sampling in the late 1970s introduced algorithmic enhancements for manipulating recorded sounds, notably pitch-shifting and time-stretching. A key development was the phase vocoder, first described in 1966 by James L. Flanagan and R. M. Golden, which processes audio in the frequency domain via short-time Fourier transforms to alter duration independently of pitch. This enabled seamless hybrid synthesis by overlaying synthetic envelopes, filters, and modulation on stretched samples, preserving natural timbres while generating novel variations.

ROMplers represented a practical hardware embodiment of these hybrids, storing pre-recorded samples in read-only memory (ROM) for playback with synthetic processing. The term "ROMpler," a portmanteau of ROM and sampler, emerged in the 1990s as sample-playback designs descended from samplers like the E-mu Emulator II evolved into affordable modules. A seminal example is the Roland JV-1080, released in 1994, which combined 8 MB of multisampled waveforms—including acoustic instruments and pads—with digital synthesis features like multi-effects and dynamic layering for 64-voice polyphony. Contemporary wavetable samplers build on this by scanning through tables of short samples as waveforms, allowing real-time morphing; instruments like the Korg Wavestate (2020) sequence and interpolate user-loaded samples across wavetables for evolving textures.

Lo-fi aesthetics in modern hybrids deliberately incorporate digital artifacts to evoke vintage warmth, with bitcrushing reducing bit depth for gritty quantization distortion and sample-rate reduction introducing high-frequency aliasing from downsampling. Plugins like Unfiltered Audio's LO-FI-AF emulate these effects through sample-rate reduction and spectral warping, applied to clean samples for retro character in electronic productions.

In applications, hybrid techniques dominate genres like electronic dance music (EDM), where drum sampling layers acoustic recordings with synthesis for punchy rhythms; the Amen break from The Winstons' 1969 track "Amen, Brother" has been sampled and pitch-shifted in countless tracks since the late 1980s, often enhanced with time-stretching. For orchestral simulations, libraries such as Spitfire Audio's Albion series (launched in 2011) hybridize meticulously sampled ensembles with built-in convolution reverb and dynamic scripting, enabling realistic virtual orchestras for film scoring in the 2010s and beyond.
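The bitcrushing and sample-rate-reduction artifacts described above reduce to a pair of simple operations. A minimal Python sketch (assuming NumPy and a signal normalized to [-1, 1]; function names are illustrative):

```python
import numpy as np

def bitcrush(signal, bit_depth=8):
    """Quantize amplitude to 2**bit_depth levels, adding gritty distortion."""
    half_levels = 2 ** bit_depth / 2
    return np.round(signal * half_levels) / half_levels

def downsample_hold(signal, factor=4):
    """Crude sample-rate reduction: hold every factor-th sample (aliasing)."""
    held = signal[::factor]                       # keep one sample per block
    return np.repeat(held, factor)[:len(signal)]  # stretch back to length

# Apply both to a clean 440 Hz sine for a lo-fi character:
t = np.arange(44100) / 44100
clean = np.sin(2 * np.pi * 440 * t)
lofi = downsample_hold(bitcrush(clean, bit_depth=6), factor=8)
```

Lower bit depths and larger hold factors push the result toward the harsher end of the vintage-digital palette.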

Control and Interfacing

Early Control Mechanisms

Early control mechanisms for electronic musical instruments relied on physical interfaces that translated performer gestures into electrical signals for pitch, volume, and modulation, often limited by the technology of the era. Keyboard-based designs were prevalent from the outset, drawing inspiration from traditional organs but adapted for electronic tone generation. The Hammond organ, introduced in 1935, featured a multi-contact keyboard that provided velocity sensitivity primarily for percussive attack characteristics rather than direct dynamic variation, allowing performers to influence the initial transient of tones through key strike speed. By the 1970s, advancements in keybed technology introduced aftertouch, where sustained pressure on keys after initial depression could modulate parameters like vibrato or filter cutoff; the Yamaha CS-80 synthesizer (1976) exemplified this with its wooden keyboard offering both velocity sensitivity for dynamic attack and polyphonic aftertouch for expressive sustain control.

Non-traditional controllers diverged from piano-like keyboards to enable more fluid or gestural expression, particularly for glissandi and continuous pitch variation. The Theremin (1920) used two antennas as capacitive sensors: a vertical rod controlled pitch by varying the distance of the right hand, while a horizontal loop antenna adjusted volume through left-hand proximity, heterodyning two radio-frequency oscillators to produce audible tones without physical contact. Similarly, the Ondes Martenot (1928) combined a specialized keyboard with a ring-and-wire mechanism for the right hand, allowing sliding along a wire for precise glissando pitch control, complemented by a left-hand "touche d'intensité" key that varied volume and timbre through resistance changes. The Trautonium (1930), developed by Friedrich Trautwein, employed a fingerboard ribbon stretched across a metal wire, where finger pressure and position generated variable resistance to control pitch continuously, often paired with a foot pedal for volume.

In early electronic organs like the Hammond, additional control came from foot-operated pedals, which provided essential dynamic expression absent in fixed-volume pipe organs. The Hammond's expression pedal, a foot-controlled variable resistor connected to the audio amplifier, allowed real-time volume swells and fades, mimicking the swell boxes of traditional organs and becoming a staple for performers seeking orchestral-like phrasing. Breath controllers appeared in some experimental wind-based electronic instruments of the period, such as prototypes exploring pneumatic interfaces, but were less common in keyboard organs; instead, foot pedals dominated for their reliability in modulating amplitude without interrupting manual play.

These mechanisms, however, faced significant limitations that constrained musical complexity. Most early instruments operated monophonically, capable of producing only one note at a time due to single-oscillator designs and shared control circuits, forcing performers to prioritize lead lines over chords. Furthermore, the absence of standardized interfaces—such as varying voltage scales or connector types—hindered integration between instruments, requiring custom adaptations for combined use and limiting portability.

The evolution toward more versatile control culminated in the 1960s with voltage-controlled modular synthesizers, pioneered by Robert Moog and Donald Buchla.
These systems used low-voltage control signals (typically scaled at 1 volt per octave) to precisely govern parameters like oscillator frequency and filter cutoff via patch cables, transforming static components into dynamic, performer-reconfigurable networks that overcame prior mechanical constraints. Moog's prototypes, building on earlier concepts like Hugh Le Caine's 1945 Electronic Sackbut, enabled intricate modulation through flexible signal routing, laying the groundwork for subtractive synthesis while retaining monophonic operation in initial models.
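The 1-volt-per-octave convention maps control voltage to frequency exponentially: each added volt doubles the pitch. A short Python sketch (illustrative; the 0 V anchor at C4 is an arbitrary assumption):

```python
import math

def cv_to_freq(cv_volts, base_freq=261.63):
    """1 V/octave: each added volt doubles the frequency (base: C4)."""
    return base_freq * 2 ** cv_volts

def freq_to_cv(freq, base_freq=261.63):
    """Inverse mapping, useful for tuning or calibrating oscillators."""
    return math.log2(freq / base_freq)

print(cv_to_freq(1.0))     # one octave up: ~523.25 Hz (C5)
print(cv_to_freq(1 / 12))  # one semitone up: ~277.18 Hz (C#4)
```

The exponential mapping is what lets equal voltage steps correspond to equal musical intervals, a property that made patchable CV practical for melody as well as modulation.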

MIDI and Digital Interfaces

The Musical Instrument Digital Interface (MIDI) standard, developed in 1983 by a consortium of music technology companies including Roland, Yamaha, Korg, and Sequential Circuits, revolutionized electronic music by providing a universal protocol for transmitting performance data between instruments, computers, and other devices, rather than audio signals. This digital messaging system allowed musicians to control multiple synthesizers from a single controller and integrate hardware with emerging software tools, marking a shift from isolated analog instruments to interconnected digital ecosystems.

At its core, the MIDI 1.0 protocol uses a serial data stream to send short messages that describe musical events, such as note activation or parameter changes, without carrying actual sound waves. Key message types include Note On and Note Off, which specify a note number (ranging from 0 to 127, corresponding to pitches from C−1 to G9) and a velocity value (0-127 for dynamics), with note duration implied by the interval between paired On and Off events; these are part of the Channel Voice Messages category. Continuous Controller (CC) messages, also within Channel Voice Messages, enable real-time modulation of parameters like volume, pan, or expression using 7-bit values (0-127) across 128 possible CC numbers. The protocol supports 16 independent channels per connection, allowing multiple instruments or parts to be addressed simultaneously on a single cable, with all messages prefixed by a status byte indicating the channel and type. Operating at a fixed rate of 31.25 kbps with 8 data bits, 1 start bit, and 1 stop bit, MIDI ensures reliable low-bandwidth transmission suitable for performance data but limits throughput to about 3,125 bytes per second.

Initial implementations relied on 5-pin DIN connectors for physical cabling, with devices featuring MIDI In (receiving data), Out (transmitting data), and Thru (passing data to another device) ports to daisy-chain connections. This opto-isolated serial interface, proposed by Dave Smith of Sequential Circuits and adopted industry-wide, used current-loop signaling to prevent ground loops and noise interference. In 1999, the USB Implementers Forum and the MIDI Manufacturers Association (MMA) introduced USB-MIDI as a class-compliant protocol, allowing direct connection of MIDI devices to computers via USB ports without dedicated interfaces, supporting up to 16 virtual cables for multichannel routing. Wireless variants emerged later, with Bluetooth Low Energy (BLE) MIDI standardized by the MMA in 2016, enabling low-latency pairing between iOS devices, controllers, and hardware over distances up to 10 meters while maintaining compatibility with the core MIDI message format.

MIDI 2.0, fully specified by the MMA in 2020, addresses limitations of the original protocol through enhancements like 32-bit resolution (versus 7-bit in MIDI 1.0) for finer control over parameters such as velocity and expression, and bidirectional communication via the MIDI Capability Inquiry (MIDI-CI) protocol, allowing devices to negotiate capabilities and exchange profiles automatically. As of 2025, MIDI 2.0 adoption has progressed with features like Network MIDI 2.0 and integrations in major operating systems. It also introduces the Universal MIDI Packet (UMP) format for efficient transport over USB, Ethernet, and other media, supporting features like per-note pitch bend and per-note controllers while preserving backward compatibility with MIDI 1.0.
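A MIDI 1.0 Note On message is just three bytes: a status byte carrying the message type and channel, then two 7-bit data bytes. A Python sketch of the raw encoding (illustrative only; real applications would normally use a MIDI library):

```python
def note_on(channel, note, velocity):
    """Build a raw 3-byte MIDI 1.0 Note On message.

    Status byte: 0x90 | channel (channels 0-15 on the wire);
    data bytes are 7-bit values (0-127).
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Note Off (status 0x80); a release velocity of 0 is common."""
    return bytes([0x80 | channel, note, 0])

msg = note_on(0, 60, 100)  # middle C on channel 1, velocity 100
print(msg.hex())           # '903c64'
```

At 31.25 kbps with 10 bits per byte on the wire, this 3-byte message takes roughly 1 ms to transmit, which is where the per-device latency figures cited below come from.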
The protocol's impact on music production was profound, enabling the workflows of pre-MIDI hardware sequencers to evolve into software-based digital audio workstations (DAWs), where MIDI data facilitates precise sequencing, editing, and automation of virtual instruments. This integration democratized composition by allowing non-expert users to program complex arrangements visually via piano rolls and grids, transforming studios from hardware-centric setups to hybrid digital environments. A notable example is the Roland GR series of guitar synthesizer controllers, starting with the GR-700 in 1984, which converted guitar string vibrations into MIDI note data for controlling synthesizers, expanding expressive possibilities for guitarists in genres like rock and fusion.

Despite its ubiquity, MIDI has inherent limitations, including potential latency from its serial nature—typically 1-10 ms per device due to baud rate constraints and buffering—which can accumulate in long chains and affect real-time performance feel. Additionally, MIDI transmits only control messages, not audio signals, requiring separate analog or digital audio connections (e.g., via XLR or quarter-inch cables) for sound output, which complicates setups involving sound modules or effects.

Modern Controllers and Gestural Interfaces

In the 2000s and 2010s, electronic musical instruments evolved to incorporate touch-based surfaces that enable intuitive, multidimensional control beyond traditional keybeds. The Korg Kaossilator, launched in 2007, exemplifies this shift with its XY touchpad, allowing users to manipulate pitch, timbre, and effects in real time by sliding fingers across the surface, facilitating dynamic phrase synthesis for loop-based performances. Similarly, the Reactable, developed in 2005 and popularized through its projection table, enables collaborative music creation where users place and drag physical objects on a tabletop interface to control synthesis, effects, and sequencing, promoting gestural interaction among multiple performers.

Gestural interfaces have further expanded expressivity by tracking body movements and breath, integrating seamlessly with protocols such as MIDI for precise parameter mapping. The Leap Motion controller, introduced in 2013, has been adapted for air drumming in electronic music setups during the 2010s, using infrared sensors to detect hand gestures and translate them into drum triggers and modulation, as demonstrated in live performances and software integrations like AirBeats. The Eigenharp Alpha, released around 2010, incorporates a breath pipe for wind-like control over volume, timbre, and articulation, combined with keywave strips for polyphonic expression, allowing performers to emulate acoustic instrument nuances in electronic contexts.

Haptic and spatial controllers introduce physical feedback and three-dimensional interaction to enhance immersion. The AudioCubes, developed by Percussa and first shown in 2007 with commercial availability by 2009, consist of wireless, LED-illuminated cubes that detect proximity, orientation, and gestures to trigger sounds and visuals, enabling modular setups for live electronic improvisation. The AlphaSphere, unveiled in 2011 by Nu Desine, features a spherical design with 48 pressure-sensitive pads encircling the hand, providing tactile sensors for triggering notes, chords, and effects through natural grasping motions.

Recent advancements from 2020 onward leverage AI and virtual reality for more adaptive and immersive control. The Sensel Morph, introduced in 2019 and updated through 2022, uses a pressure-sensitive overlay system with software-driven gesture interpretation to turn inputs into MPE-compatible controls, supporting customizable mappings for electronic performance. VR controllers in immersive music platforms launched around 2020 allow users to wield virtual instruments like guitars and drums, tracking full-body gestures for spatial audio manipulation during live electronic sets. As of 2025, further innovations include AR-based gestural interfaces for instrument augmentation and controllers like NeoLightning for multidimensional gesture control. Mobile integration has democratized these interfaces, with touchscreen apps updated iteratively through the 2020s turning phones and tablets into virtual controllers for real-time looping and synthesis, often via wireless connectivity.
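XY surfaces like those above are typically mapped with one axis exponential (for musically even pitch or cutoff sweeps) and the other linear. A hypothetical Python mapping (the function, parameter names, and ranges are illustrative assumptions, not any vendor's API):

```python
def map_xy(x, y, min_cutoff=80.0, max_cutoff=8000.0):
    """Map normalized touch coordinates (0.0-1.0) to synth parameters.

    X sweeps filter cutoff exponentially; Y sets resonance linearly.
    """
    cutoff = min_cutoff * (max_cutoff / min_cutoff) ** x  # exponential in x
    resonance = y                                         # linear, 0.0-1.0
    return cutoff, resonance

print(map_xy(0.5, 0.7))  # x midpoint lands at the geometric mean, ~800 Hz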

Specialized Genres and Applications

Chip Music and 8-Bit Sounds

Chip music, commonly referred to as chiptune, emerged from the constrained audio capabilities of early digital hardware in video games and home computers during the late 1970s and 1980s. One foundational device was the General Instrument AY-3-8910 programmable sound generator, released in 1978, which provided three independent tone generators and a noise channel for basic melodic and percussive elements in arcade games and home systems such as the MSX and Amstrad CPC. This chip's design allowed for polyphonic sound within limited resources, marking an early step toward more sophisticated electronic music generation. Similarly, the SID (Sound Interface Device) chip, developed by engineer Bob Yannes in 1981 and integrated into the Commodore 64 computer launched in 1982, featured three oscillators capable of producing sawtooth, triangle, pulse, and noise waveforms, along with an analog filter for dynamic sound shaping. The SID's versatility enabled composers to create intricate scores for the approximately 12.5 to 17 million Commodore 64 units sold from 1982 to 1994, solidifying its role in pioneering 8-bit aesthetics.

Key techniques in chip music leverage these chips' limited channels to produce distinctive sounds. Pulse-width modulation (PWM) on pulse wave oscillators, often set to 12.5%, 25%, or 50% duty cycles, generates the bright, nasal lead melodies typical of chiptunes by varying the waveform's shape to alter harmonics. Noise channels, present in chips like the AY-3-8910 and SID, deliver percussion through pseudo-random binary sequences, simulating drums or effects with adjustable shift rates for tonal variation. These methods emphasize efficiency, often incorporating arpeggios and rapid note sequences to maximize musical complexity within 3-5 channels.

The demoscene community, originating in the mid-1980s among European software crackers and hobbyists, fostered chip music innovation through competitive demos that showcased hardware limits, using early trackers like Karsten Obarski's Ultimate Soundtracker (1987) on the Commodore Amiga for 4-channel sample-based composition. This subculture's emphasis on real-time playback and file-size constraints directly influenced chiptune's modular, pattern-based style. Modern tools like FamiTracker, a free Windows-based tracker released in the mid-2000s, emulate NES sound chips with precise control over pulse, triangle, and noise channels, enabling artists to recreate authentic 8-bit tracks without original hardware.

Notable instruments repurposed for chiptune include the Nintendo Game Boy, transformed in the 2000s into a portable chiptune workstation via LSDJ (Little Sound DJ) software, which utilizes the console's 4-channel hardware—two pulse waves, one programmable wave, and one noise—for live sequencing and sample playback. In the 2010s, compact hardware like the Bastl Kastle mini synthesizer emerged, offering battery-powered digital oscillators with lo-fi modes that mimic 8-bit timbres through voltage-controlled pitch, timbre, and waveshaping for experimental chiptune production.

Culturally, chip-based soundtracks from Nintendo titles like Super Mario Bros. (1985) and Sega's Sonic the Hedgehog (1991) defined the 8- and 16-bit eras' energetic, memorable style, blending pop influences with hardware quirks to enhance gameplay immersion and inspire a generation of musicians. The genre's revival in the 2020s integrates into lo-fi genres, where slowed, nostalgic remixes of classic game themes serve as ambient backdrops for study and relaxation playlists, bridging retro gaming heritage with contemporary electronic music.
This resurgence appears in modern game scores, such as Shovel Knight (2014), and in the work of artists like Anamanaguchi, expanding chiptune's influence beyond niche communities. As of 2025, the scene remains active with events like the Boston Bitdown festival and new album releases.
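The duty-cycle pulse waves described above are straightforward to synthesize. Here is a minimal Python sketch (assuming NumPy) of a naive pulse oscillator at chiptune-style duty cycles:

```python
import numpy as np

def pulse_wave(freq, duty, duration, sample_rate=44100):
    """Naive pulse oscillator: high while the cycle phase is below `duty`.

    Duty cycles of 0.125, 0.25, and 0.5 give the classic thin-to-hollow
    chiptune lead timbres. Band-limiting is deliberately ignored, much
    as the original sound chips ignored it.
    """
    t = np.arange(int(duration * sample_rate)) / sample_rate
    phase = (freq * t) % 1.0                 # normalized phase in [0, 1)
    return np.where(phase < duty, 1.0, -1.0)

lead = pulse_wave(440.0, duty=0.125, duration=0.5)  # bright 12.5% pulse
```

Sweeping `duty` over time is the essence of the PWM leads heard throughout SID and NES music.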

Electronic Music Production Tools

Electronic music production tools encompass a range of integrated software and hardware systems designed for composing, arranging, and refining tracks in studio environments. These tools facilitate the creation of complex soundscapes through intuitive interfaces that support layering, manipulation, and real-time adjustments, enabling producers to craft everything from ambient textures to high-energy beats.

Central to this ecosystem are digital audio workstations (DAWs), which serve as the primary hubs for electronic music creation, often incorporating virtual instruments, effects, and sequencing capabilities. Prominent DAWs include Ableton Live, first released in 2001, which revolutionized production with its session view for live looping and non-linear arrangement, allowing producers to trigger and layer audio clips spontaneously during composition. In contrast, FL Studio, originally launched as FruityLoops in 1997, excels in beatmaking through its step sequencer and pattern-based workflow, making it a staple for hip-hop and electronic genres with efficient drum programming and loop manipulation.

Hardware complements software in production setups, particularly drum machines and sequencers that offer tactile control and standalone operation. The Elektron Octatrack, introduced in 2010, represents a modern performance instrument as an eight-track sampler and sequencer, enabling dynamic sample manipulation with real-time slicing, resampling, and crossfading for intricate beat construction. Contemporary hardware sequencers increasingly incorporate probabilistic algorithms to introduce controlled randomness, such as trig probability in Elektron devices, where steps trigger based on user-defined chances to generate evolving patterns without full unpredictability.

Effects processors enhance production by adding spatial and temporal depth to mixes. Units like the Strymon BigSky, released in 2013, provide multiple reverb algorithms including plate, spring, and hall simulations, with MIDI control for preset switching and parameter modulation, allowing producers to craft immersive electronic atmospheres directly in hardware.

Studio workflows in electronic music production rely on seamless integration of Virtual Studio Technology (VST) plugins, which extend DAW functionality with third-party synthesizers, effects, and processors hosted within the host software. Automation curves enable precise control over parameters like volume, panning, and filter cutoff across timelines, drawn as bezier curves for smooth transitions that add movement to tracks. Stem exporting streamlines collaboration and finalization by rendering grouped tracks—such as drums, bass, and vocals—as separate audio files, preserving effects and processing for remixing or mastering.

From 2020 to 2025, trends in electronic music production have emphasized cloud-based collaboration and AI-driven tools to accelerate workflows. Platforms like Splice, launched in 2013, facilitate remote sample sharing and project syncing, enabling distributed teams to co-create beats and arrangements via cloud storage. AI mastering services, such as LANDR, introduced in 2014, automate loudness optimization, EQ balancing, and stereo enhancement using machine learning trained on professional references, reducing post-production time while maintaining artistic intent. These advancements have democratized high-fidelity production, with growing AI adoption among producers—surveys indicating around 25% usage in music creation as of 2024—and by 2025, generative AI has further integrated into workflows, reducing production costs by up to 70% while raising discussions on licensing and creativity.
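Trig probability, as described above, simply gates each sequencer step on a random draw. A minimal Python sketch of the idea (illustrative only, not any manufacturer's implementation):

```python
import random

def run_pattern(steps, repeats=4, seed=None):
    """Play a 16-step pattern where each step fires with its own probability.

    `steps` maps step index -> probability (0.0-1.0); absent steps are rests.
    Returns one fired/rest list per repeat, showing how the pattern evolves.
    """
    rng = random.Random(seed)
    bars = []
    for _ in range(repeats):
        bars.append([rng.random() < steps.get(i, 0.0) for i in range(16)])
    return bars

# Kick on every quarter note (certain), ghost hits at 30% probability:
pattern = {0: 1.0, 4: 1.0, 8: 1.0, 12: 1.0, 2: 0.3, 10: 0.3}
for bar in run_pattern(pattern, seed=42):
    print("".join("x" if hit else "." for hit in bar))
```

Because the anchor steps stay deterministic while the ghosts vary, the groove mutates from bar to bar without ever losing its pulse, which is the musical point of the technique.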

DIY and Experimental Practices

Circuit Bending

Circuit bending is a creative practice in electronic music that involves modifying the internal circuitry of low-voltage devices, such as toys and small synthesizers, to produce unintended and experimental sounds through deliberate short-circuiting and component additions. This hardware hacking technique emerged as a form of sonic exploration, often yielding glitchy, chaotic audio effects that contrast with the warmth of traditional analog synthesis by emphasizing digital instability and noise.

The origins of circuit bending trace back to the mid-1960s, when composer and technologist Reed Ghazala accidentally discovered the method by shorting two points on the circuit board of a battery-powered toy amplifier, producing unexpected tones. Ghazala continued experimenting throughout the decade, but it was not until 1992 that he formally coined the term "circuit bending" to describe the intentional rewiring of existing audio circuits for artistic purposes.

Core techniques in circuit bending include installing "bending switches" to temporarily short specific points on a device's circuit board, which can alter pitch, introduce distortion, or generate rhythmic glitches by rerouting signals. Practitioners often add external components like photoresistors for light-sensitive control or potentiometers to variably adjust resistance, enabling dynamic manipulation of the sound output without requiring advanced electronics knowledge. These modifications are typically performed on devices powered by low-voltage batteries to minimize risk, with explorations conducted while the device is active to audition changes in real time.

Popular base devices for circuit bending include the Speak & Spell educational toy from 1978, valued for its speech synthesizer chip that yields vocal glitches and metallic tones when bent. Similarly, vintage keyboards, such as the Casio SK-1 sampler, are frequently modified to produce lo-fi, 8-bit-inspired sounds through pitch instability and filter alterations.

Safety is paramount in circuit bending, as improper handling can lead to device failure or electric shock; practitioners are advised to avoid devices with high-voltage or mains power supplies, opting instead for battery-operated toys where currents rarely exceed safe levels. Modifications can be non-destructive using reversible switches and clips, preserving the original functionality, or destructive through permanent soldering, which risks irreparable damage to the circuit.

The circuit bending community fosters collaboration through events like the annual Bent Festival, which features performances and workshops showcasing bent instruments, with recent iterations such as the 2025 virtual and in-person editions highlighting global artists. Key resources include Reed Ghazala's influential 2005 book Circuit-Bending: Build Your Own Alien Instruments, which provides detailed guides on techniques and philosophy, establishing foundational principles for the practice.

Modular and Custom Builds

Modular and custom builds represent a cornerstone of DIY culture in electronic music, enabling enthusiasts to assemble personalized systems from individual components. These setups emphasize expandability and experimentation, allowing users to configure sound generation, signal processing, and control elements according to their creative needs. Unlike fixed commercial instruments, modular systems foster a hands-on approach where modules can be added, removed, or rearranged, promoting innovation within a standardized format.

The Eurorack format, introduced by Dieter Doepfer in 1995, standardized this modular approach with a compact 3U height (approximately 133.35 mm) based on the DIN 41494 industrial specification, facilitating compatibility across manufacturers. Power distribution follows a bipolar rail system of +12V and -12V DC, with optional +5V for digital components, delivered via a 16-pin connector. Common module categories include voltage-controlled oscillators (VCOs) for sound generation, sequencers for rhythmic patterning, filters for tonal shaping, and envelope generators for dynamic control, all interconnected via 3.5 mm patch cables carrying control voltages (CV) and audio signals. This standardization, inspired by earlier systems like the Moog modular from the 1960s, has enabled widespread adoption by reducing barriers to entry for builders.

In the modern DIY landscape, integrations of microcontrollers like the Arduino and Teensy have expanded custom controller possibilities, enabling programmable interfaces for gesture-based input, sensor-driven modulation, and software-defined synthesis within Eurorack frames. A prominent example is Mutable Instruments, which released open-source module designs in the 2010s, such as the Plaits oscillator and Rings resonator; the company discontinued production in 2022, but the designs remain available for hobbyists to fabricate and modify using accessible schematics and PCBs. These efforts democratize advanced features like physical modeling and granular processing, often prototyped on breadboards before panel mounting.

A post-2010 revival has fueled this growth, with boutique makers like Make Noise and Intellijel driving innovation through high-quality, ergonomic modules that blend analog warmth with digital precision. Make Noise's systems, such as the 0-Coast, emphasize intuitive patching for performative sound design, while Intellijel's offerings, including the Metropolix sequencer, prioritize reliability and integration in larger rigs. This boom, marked by increased availability via online marketplaces and communities, has seen the modular synthesizer evolve from niche hobby to mainstream electronic music tool.

Custom builds in the 2020s increasingly incorporate digital fabrication, with laser-cut acrylic or wood cases providing sturdy, affordable enclosures for module rails and power supplies. 3D-printed interfaces, such as custom knobs, faceplates, and even full skiffs, allow for the fabrication of unique form factors, like tilted panels for live performance or integrated sensor mounts. These techniques lower costs and enable personalization, with open designs often shared on platforms such as GitHub.

Key resources for builders include the Mod Wiggler forum, a central hub for discussions on schematics, troubleshooting, and collaborations since the late 2000s, and DIY kit providers like Modular Addict, offering pre-assembled PCBs and components for modules such as filters and oscillators. These communities and suppliers sustain the iterative, collaborative spirit of modular construction.
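As a small illustration of the CV conventions above, the core logic of a quantizer module, which snaps an incoming control voltage to the nearest 1/12-volt semitone under the 1 V/octave standard, can be sketched in a few lines of Python (illustrative only; a hardware module would do this on an ADC/DAC pair):

```python
def quantize_cv(cv_volts):
    """Snap a control voltage to the nearest semitone (1/12 V at 1 V/octave)."""
    return round(cv_volts * 12) / 12

for cv in (0.03, 0.51, 1.27):
    print(f"{cv:.2f} V -> {quantize_cv(cv):.4f} V")
# 0.03 V -> 0.0000 V, 0.51 V -> 0.5000 V, 1.27 V -> 1.2500 V
```

Patching a free-running modulation source through such a quantizer is a common way to turn continuous CV into in-tune melodic material.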
