MUSICA ELECTRONICA

By 1990, analog synthesizers such as those made by Moog, Buchla, and ARP had been largely superseded by inexpensive, computer-based digital synthesis techniques. Computer processors are used in every conceivable type of musical equipment. They are the backbone of digital synthesizers, effects boxes, mixers, multi-track recorders, and the other basic devices of a working musician. Most commercial recordings are now recorded, mixed, and mastered using digital equipment.

 

Digital music systems such as the personal computer have their roots in the general-purpose mainframe computers developed from the 1950s onward. Computer music was largely institutionalized within the research divisions of the companies, universities, and governments that sponsored it, an approach that began with large mainframe computers at places like the University of Illinois, Bell Labs, and IRCAM.

 

In analog electronic musical instruments, sound is represented by continuously varying electrical voltage: electrical vibrations that, when amplified, physically drive the components of the speaker system. While an analog circuit works on a measurement principle, a digital circuit works on a calculation principle: its parameters are expressed as numbers.

 

In a digital music system, the quantities representing frequency, amplitude, timbre, duration, and the envelope of a sound are likewise expressed as numbers. These numbers are entered and recalculated to achieve the desired result, such as increasing the volume or changing the timbre. Instructions for making these changes can be entered through software on a computer or directly from the physical controls (e.g., knobs and switches) of the electronic musical instrument.
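To make the calculation principle concrete, here is a minimal sketch in Python (an illustration only, not tied to any particular instrument or software mentioned above): because a digital signal is just a sequence of numbers, increasing the volume amounts to multiplying every sample value by a gain factor.

# Illustrative sketch: a digital signal is a list of numbers, so "turning up
# the volume" is simply multiplying each sample by a gain factor.

samples = [0.00, 0.42, 0.81, 0.99, 0.81, 0.42, 0.00, -0.42]  # sample values in the range -1.0..1.0

def change_volume(samples, gain):
    # gain > 1.0 makes the signal louder, gain < 1.0 makes it quieter
    return [s * gain for s in samples]

louder = change_volume(samples, 1.5)   # 50% more amplitude
quieter = change_volume(samples, 0.5)  # half the amplitude
print(louder)
print(quieter)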

 

Digital Synthesis of Sounds

The computer produces sounds by means of semiconductor oscillators on integrated circuits. The sounds can be triggered directly from a MIDI instrument or generated by a program that emulates a synthesizer. A digital-to-analog converter (DAC) converts the digital binary codes into analog electrical waves that drive the speaker system.
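As a rough sketch of this principle (standard Python only; the note frequency and file name are arbitrary examples, not taken from any system described here), a software oscillator computes successive sample values of a waveform and stores them as 16-bit integers; in hardware, a DAC turns exactly these integers into the voltage that drives the speaker.

# Sketch of a software oscillator: compute sine-wave samples and store them as
# 16-bit integers in a WAV file. A hardware DAC would convert these integers
# into an analog voltage for the speaker system.

import math
import struct
import wave

SAMPLE_RATE = 44100   # samples per second
FREQUENCY = 440.0     # pitch in hertz (the A above middle C)
DURATION = 1.0        # length of the tone in seconds
AMPLITUDE = 0.5       # fraction of full scale

frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION)):
    value = AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
    frames += struct.pack('<h', int(value * 32767))   # one 16-bit signed sample

with wave.open('tone.wav', 'wb') as f:
    f.setnchannels(1)            # mono
    f.setsampwidth(2)            # 2 bytes = 16 bits per sample
    f.setframerate(SAMPLE_RATE)
    f.writeframes(bytes(frames))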

 

Computer Control of External Synthesizers

Electronic instruments may be controlled by interface software running on a computer. Such software determines various parameters of the sounds played on the instruments connected to the computer. The computer can act as a sequencer assisting a musician-performer, or it can control many of the elements involved in creating a piece of music, a process that, because of the number and variety of those elements, is beyond what a human can control in real time. Computers began to be used in this role in the late 1970s, when inexpensive microprocessors appeared.
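The sketch below (plain Python; the notes and timings are hypothetical, and actually transmitting the bytes would require a MIDI interface and driver, which are omitted) shows the kind of data such sequencer software prepares: in the MIDI protocol, a note-on message is a status byte (0x90 for channel 1) followed by a note number and a velocity, and a sequence is simply a timed list of such messages sent to the connected instrument.

# Sequencer-style control data: each step is (start time in seconds, MIDI note
# number, key velocity). The three-byte messages built here are what a MIDI
# interface would transmit to the external synthesizer.

NOTE_ON = 0x90    # status byte: note on, MIDI channel 1
NOTE_OFF = 0x80   # status byte: note off, MIDI channel 1

sequence = [
    (0.0, 60, 100),   # middle C
    (0.5, 64, 100),   # E
    (1.0, 67, 100),   # G
]

def note_on_bytes(note, velocity):
    return bytes([NOTE_ON, note, velocity])

def note_off_bytes(note):
    return bytes([NOTE_OFF, note, 0])

for start, note, velocity in sequence:
    print(f"t={start:.1f}s  send {note_on_bytes(note, velocity).hex()} "
          f"then {note_off_bytes(note).hex()}")

# Sending these bytes to real hardware requires opening a MIDI port through an
# interface or driver; that part is hardware-specific and not shown here.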

 

Digital Audio Sampling

This is the opposite of digital-to-analog conversion. In analog-to-digital conversion, the input signal from a microphone or other analog audio source is converted into binary codes, which can then be freely processed on a computer. Bell Labs began experiments with digitizing analog sounds in 1958.
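A simplified sketch of what the converter does (the analog input is simulated here by a mathematical function, since a real microphone signal cannot be captured in a few lines of code): the signal is measured at regular intervals, and each measurement is rounded to the nearest value representable in the chosen number of bits.

# Simulated analog-to-digital conversion: sample a continuous signal at a fixed
# rate and quantize each measurement to a 16-bit binary code.

import math

SAMPLE_RATE = 8000               # measurements per second
BITS = 16                        # resolution of each binary code
MAX_CODE = 2 ** (BITS - 1) - 1   # 32767 for 16-bit signed values

def analog_signal(t):
    # Stand-in for a microphone signal: a 440 Hz tone at half amplitude.
    return 0.5 * math.sin(2 * math.pi * 440.0 * t)

codes = []
for n in range(10):                                    # first ten samples only
    t = n / SAMPLE_RATE                                # time of this measurement
    codes.append(round(analog_signal(t) * MAX_CODE))   # quantize to an integer

print(codes)   # these integers are the binary codes the computer processes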

 

 

A Brief History of Computer Music

 

The timeline below presents a brief history of the development of computer music, from the early uses of mainframe computers to the advent of dedicated computer music systems and the integration of personal computers with music software and sound-generating devices.

 

 

1953-54

Greek composer Iannis Xenakis employs a computer to calculate the variable speed glissandi in his symphonic composition Metastasis.

 

1955-57

Lejaren Hiller and Leonard Isaacson develop a computer program to generate data sequences that can be used as pitches and other parameters of a musical score. Using this process, they compose the first significant piece created with the aid of a computer - Illiac Suite for string quartet (1957).

 

1956-62

Iannis Xenakis writes probabilistic computer programs to help in composing music. Instead of programming the computer to compose the piece itself, Xenakis supplies the computer with pre-calculated data and uses it to compute complex score parameters for instrumental groups of various sizes. Using this method, he composes the following pieces: ST/10-1,080262 for Ten Instruments, Atrees (Law of Necessity), Morsima-Amorsima, and ST/48-1,240162 for 48 Instruments.

 

1957

At Bell Labs, researcher Max Mathews successfully demonstrates computer generation of sound using digital-to-analog conversion (DAC) for the first time. For Mathews, it is the beginning of many years of research into computer music.

 

1959-66

Max Mathews and his colleagues at Bell Labs experiment extensively with computer-synthesized music. Their compositions range from simple demonstrations and renditions of well-known melodies to more complex pieces. The Bell Labs team develops a series of programs to automate digital sound processing and composition: the first is Music I from 1957, and those that follow are updated and improved versions. Music IV (1962) is used extensively during the 1960s.

 

1965

At Bell Labs, French physicist and composer Jean-Claude Risset uses the program written by Max Mathews and Joan Miller to digitize the sound of a trumpet. This analog-to-digital conversion experiment is especially important because earlier programs had not been able to faithfully reproduce the sound of a brass instrument.

 

1966

Max Mathews and Lawrence Rosler of Bell Labs develop a graphical interface for composing music. It consists of a cathode-ray tube on which the parameters of pitch, amplitude, duration, and glissando are drawn with a light pen on a grid showing the progression of notes in time. The result is saved and can be reproduced through computer synthesis. This is the first successful composer-friendly experiment with software for drawing, copying, deleting, and editing musical values on a computer.

 

1967-69

At the University of Illinois, John Cage and Lejaren Hiller are collaborating on a massive multi-media work titled HPSCHD. The piece is intended for seven harpsichords and 51 computer-generated sound tapes.

 

1969-74

Max Mathews, F.R. Moore and Jean-Claude Risset of Bell Labs release their Music V program, an enhanced version of Bell's earlier programs to create computer-generated sounds. Responding to a call for a computer program that can be used in live performances, the group develops a program called GROOVE, which allows the computer to be used as a voltage control device for an analog synthesizer.

 

1970

French President Georges Pompidou appoints Pierre Boulez to establish and lead an institute for musical research. As a result, the IRCAM institute (Institut de Recherche et Coordination Acoustique/Musique, the Institute for Research and Coordination in Acoustics/Music) is established in 1974. Jean-Claude Risset becomes the first director of its computer department. Over the years this international research center for computer music and new technologies hosts many projects and develops software used by composers. John Chowning works on FM synthesis at IRCAM, and in the mid-1980s Miller Puckette creates a computer program called Max there. This program is later developed into the graphical programming environment Max/MSP for real-time sound processing and synthesis, which becomes a widely used tool in electroacoustic music. Many techniques related to spectralism, such as Fast Fourier Transform analysis, come into use thanks to IRCAM's technological contributions. In 1990, IRCAM begins a program of courses in computer music and composition for young composers.

 

1974-75

The Synclavier, the first commercially available portable digital synthesizer, developed by composer Jon Appleton and engineers Sydney Alonso and Cameron Jones, is introduced. It is a performance instrument able to store recorded tracks that can be used interactively while playing the keyboard in real time.

 

1975-82

Mini- and microcomputers begin to be used as control devices for analog synthesizers. The development of microprocessor technology allows sound-synthesizing chips to be built into musical instruments and professional synthesizers. The first fully digital synthesizers are introduced to the commercial market, and computer music programs come into use on personal computers from companies such as Apple, Commodore, and Atari.

 

1976

The 4A digital audio processor is completed at IRCAM by a team led by Giuseppe Di Giugno. Further versions of this digital synthesizer, the 4B, 4C, and finally the 4X series, are developed between 1976 and 1981.

 

In the United States, composer Joel Chadabe becomes the first user of the Synclavier digital synthesizer produced by the New England Digital Corporation. However, he does not use a keyboard. Instead, for his first Synclavier project, he commissions Robert Moog to build Theremin-based controllers for the synthesizer. The Theremin in this case is used not as a sound-producing instrument but as a controller of the computer, by way of frequency-to-voltage converters.

 

1979

The IRCAM team led by Xavier Rodet completes the first version of a computer program called Chant, which creates synthesized sounds based on computer models of the singing voice.

 

The Fairlight CMI (Computer Musical Instrument) digital synthesizer is developed in Australia and launched in 1979. Providing a complete set of sound-modeling features, it comes with its own computer, dual eight-inch disk drives, a six-octave touch-sensitive keyboard, and software for creating and manipulating sounds. Its most innovative feature is an analog-to-digital converter for processing incoming audio signals from analog sources, making it the first digital sampling instrument on the market. An external audio signal can also be used as a control signal, much as in earlier voltage-controlled synthesizers. It also offers a sequencer, 400 preset sounds, and the ability to create new tonal scales tuned in steps as small as one hundredth of a semitone. The device records live tracks and can combine them with previously recorded ones; in the studio, the system can control the synchronization of up to 56 voices on an eight-track tape recorder.

 

1980

Casio introduces the first portable digital musical instrument, the Casio VL-Tone. This small monophonic instrument with a two-and-a-half-octave mini-keyboard includes preset rhythms and instrument voices and allows the user to save sequences of up to 100 notes. It is programmed by entering an eight-digit number to select the waveform and envelope, and its three sound waves can be modulated by a low-frequency oscillator. It is the first affordable digital synthesizer.

 

1981

E-mu introduces the Emulator, a digital sampling keyboard. It has eight-voice polyphony and a real-time looping function.

 

Répons, the first computer-based composition by Pierre Boulez at IRCAM, has its premiere at the Donaueschingen festival. It is created using the 4X digital synthesizer developed at the institute. The piece is performed by twenty-four musicians; the soloists' sounds are modulated by the synthesizer and sent to a network of loudspeakers in the concert hall.

 

1981-83

Personal computers from IBM and Apple begin to dominate the personal computer market. Basic, inexpensive software packages for making simple music on these machines start to appear.

 

1983

Casio introduces the PT-20, a two-and-a-half-octave monophonic instrument. It offers seven programmed voices, including piano, organ, violin, and flute, along with 17 rhythms. Preset chord algorithms are played with buttons labeled with chord symbols such as major, minor, and seventh. Using a feature called the "automatic judging chord generator," one can play the keyboard with one finger and the instrument will automatically select and play an accompanying chord. The keyboard can also store up to 508 notes for playback. The instrument is a breakthrough in the way Casio's engineers use the computer as an interpretive and accompanying tool for the user.

 

Synclavier II is introduced. It has capabilities similar to those of the Fairlight CMI but is designed more as a musical instrument than as a computer. The control panel has dozens of buttons arranged by function, such as volume, envelope, recorder control, vibrato, and timbre bank. The instrument features 16 digital oscillator voices and 16-track recording. Its digital sampling function can digitize analog sounds over a higher frequency range than the Fairlight instrument, and its digital recorder can store a sequence of 2,000 notes in memory, expandable to 15,000 notes.

 

Kurzweil Music Systems introduces the K250, the first keyboard to use digital samples of acoustic instruments as a sound source. The samples stored in ROM reproduce piano, strings, choirs, drums and other acoustic instruments with great clarity.

 

Syntauri Corporation presents its alphaSyntauri system, designed to enable music creation on a desktop computer. The system uses an Apple II computer as its brain, one or two disk drives, and a video monitor; the digital audio oscillators are contained on a printed circuit board manufactured by Mountain Computer. Syntauri supplies the software, a four- or five-octave keyboard, and an interface. The alphaSyntauri is not as capable as the Fairlight CMI or Synclavier II, but it marks the beginning of a move toward less expensive electronic music systems based on personal computers.

 

1984

MIDI is introduced as the standard communication language for synthesizers and personal computers.

 

Apple Computer introduces the Macintosh, which will soon become the desktop computer used by most electronic musicians. Its graphical interface and pictorial operating system are better suited to music applications than those of earlier personal computers.

 

1985

Mark of the Unicorn (MOTU), the software developer, introduces Performer (later Digital Performer), one of the first MIDI sequencing programs for the Macintosh.

 

IRCAM releases its first music software for personal computers, created by a team led by David Wessel. In addition, a library of computer functions for computer-aided composition is completed by Claudy Malherbe, Gerard Assayag, and Jean-Baptiste Barriere.

 

1986

Composer Laurie Spiegel creates a computer program called Music Mouse - An Intelligent Instrument for the Macintosh, Amiga, and Atari. Music Mouse is more a music-making tool than a programming environment. It provides a choice of several musical scales, tempos, transpositions, and other controls, which are played with a special "polyphonic" cursor moved with the mouse over a visual grid representing a two-dimensional pitch range.

 

1988

Korg introduces the M1 Music Workstation, a computer-based synthesizer with a built-in display, sequencer, drum machine, digitally sampled sounds, and digital effects. Approximately 250,000 units are eventually sold, a breakthrough for the digital music instrument industry.

 

IRCAM releases the first version of Max, a graphical programming language for music applications created by Miller Puckette. It is developed to support real-time interaction between performer and computer and provides a wide range of virtual patches and controllers for sound processing.

 

1990

A musician-friendly version of Max, with a design improved by David Zicarelli, is introduced by Opcode. This Macintosh software is an immediate success and remains among the most widely used real-time electronic music programs for the next three decades.

 

Symbolic Sound introduces a two-processor electronic music system based on a microcomputer. The software, called Kyma, works with a dedicated set of sound processors, originally a device called the Capybara. Like Max, but running on its own hardware, the system is well suited to real-time audio processing in live performance.

 

