Aim of Project and Background

The Evolution of Music Technology

Music equipment has traditionally been manual in nature: the musician uses fingers, arms, feet and so on to control a device. During the 1960s and 70s, however, musicians witnessed the widespread adoption of electronic, "analogue" musical instruments. These devices used analogue voltage signals to control the pitch, amplitude, waveform and other properties of a sound. A certain degree of manual control remained, as the musician would use various knobs and sliders to shape the sound and music. These devices were also often capable of "sequencing", that is, they could record or be programmed with a musical pattern which was played back in time with an external timing signal supplied by a "master" device. Analogue synthesisers were therefore capable of being controlled remotely. Remote control could also include control voltages for pitch and the like, and a "gate" voltage to signal when notes were "on" or "off". This meant that a single musician could be in control of several instruments at the same time, without actually having to "play" them all at once.
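
To make the control-voltage idea concrete, here is a minimal sketch in Python, assuming the common one-volt-per-octave pitch convention (an assumption on my part; some manufacturers used other scalings, such as hertz-per-volt):

    # Sketch of analogue CV/gate control, assuming 1 volt per octave.

    def note_to_cv(semitones_above_reference):
        """Each semitone adds 1/12 of a volt; each octave adds one volt."""
        return semitones_above_reference / 12.0

    GATE_ON, GATE_OFF = 5.0, 0.0   # a typical gate signal: +5 V while a note sounds

    print(note_to_cv(12))          # one octave above the reference note -> 1.0 V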

The next evolutionary step was the MIDI standard, introduced during the 1980s. MIDI stands for Musical Instrument Digital Interface, and consists of control events which trigger MIDI devices. Analogue equipment uses a separate wire for each property being controlled, and often still relies on manual control for sliders and the like. With MIDI, however, only a single cable is used, which carries all the digital information required. This single data stream can control any controllable property of the MIDI device, so manual control is no longer necessary. Unlike some analogue synthesisers, nearly all MIDI synthesisers are "polyphonic", meaning they can play several sounds simultaneously. A MIDI device may also possess up to 16 channels, which act as 16 independent synthesisers. Several MIDI devices can be attached to one network, but a network carries a maximum of 16 channels; using multiple MIDI networks (referred to as "ports" by sequencers) overcomes this limit.
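
To make the event stream concrete, the sketch below shows how two common MIDI messages are encoded as bytes, following the MIDI 1.0 specification (the function names are illustrative, not taken from any particular library):

    # Raw MIDI messages: a status byte (message type plus channel)
    # followed by one or two 7-bit data bytes.

    def note_on(channel, note, velocity):
        """Note On: status 0x90 + channel (0-15), then note and velocity (0-127)."""
        return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

    def note_off(channel, note):
        """Note Off: status 0x80 + channel."""
        return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

    # Middle C (note 60) at moderate velocity on channel 1 (index 0):
    msg = note_on(0, 60, 64)   # three bytes: 90 3C 40 in hexadecimal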

Many MIDI instruments can produce very high-quality, realistic sounds. Furthermore, the MIDI specification has been used to allow the control of a whole host of audio-visual devices, such as samplers, audio mixing desks, lighting desks and computers. MIDI input devices also exist, such as MIDI keyboards, sliders (which produce a scalar output), drum kits, saxophones and guitars, as well as a few more elaborate ones such as MIDI "batons", which are waved around in 3D space by performance artists, and voice-to-MIDI converters.

MIDI could be thought of as an audio-visual control language. Many hardware devices, including computers, can communicate via MIDI. A MIDI setup can be controlled centrally, and the whole sequence of events can be recorded and stored. It therefore becomes apparent that modern music, computer and audio-visual technology hold great potential.

The Importance of the Interface

The only thing stopping the musician from gaining full access to this potential is the human-machine interface. It is this interface which permits the user to access the functionality of the equipment and compose musical pieces - or not, as the case may be. It is vitally important that the musician has control over the equipment, and this control must be captured in an intuitive interface.

An Example Sequencer

The mainstream solution is to use desktop computer-based "sequencers" which drive all the equipment. An example of this is "Cakewalk", published by Twelve Tone Systems Inc. (1). Cakewalk’s main view can be seen in figure 1.1.

Here, the user has created a number of music "tracks". Each track has a name ("Slap Bass 2") and consists of a list of MIDI events which are sent to a MIDI device channel specified by "Chn" (channel number) and "Port" (MIDI network number). "Patch n" tells the device channel to switch to instrument n. The user has access to other modifiers such as transpose (Key+), velocity (how hard a note is hit), volume, time offset, pan (stereo position) and so on. Tracks are normally created by recording from a MIDI input device. Tracks can also be edited or created by hand via a staff editor as seen in figure 1.2, by using a "piano roll" editor as seen in figure 1.3, or by viewing the track as a list of MIDI events.
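
As an illustration (this is a hypothetical sketch, not Cakewalk's actual internal format), a track like the one above might be represented as follows; note that the "Patch n" setting corresponds to a standard MIDI Program Change message:

    # Hypothetical representation of one sequencer track.

    track = {
        "name":  "Slap Bass 2",
        "port":  1,        # which MIDI network the events are sent to
        "chn":   3,        # device channel on that network (1-16)
        "patch": 37,       # General MIDI program 38, "Slap Bass 2" (zero-based here)
        "key+":  0,        # transpose, in semitones
        "events": [
            # (time in ticks, note number, velocity, duration in ticks)
            (0,   40, 100, 120),
            (240, 43,  90, 120),
        ],
    }

    def program_change(channel, program):
        """'Patch n' maps to a Program Change: status 0xC0 + channel, then n."""
        return bytes([0xC0 | (channel & 0x0F), program & 0x7F])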

Figure 1.1 - Cakewalk’s main interface

Figure 1.2 - Cakewalk score editor

Figure 1.3 - Cakewalk piano roll editor

Figure 1.4 - Cakewalk faders view

Problems With This Type of Sequencer

The solution I have just described is typically used on a home computer or in a recording studio. Although it is effective and widely used, I find there are a few problems with this approach, which the following sections explore.

Present State of the Art in Auditoriums

Nowadays, many auditoriums have impressive sound systems, lights and computer graphics projections. It would appear, however, that these systems are operated almost entirely independently of one another. Where a connection does exist, it usually consists only of an audio line, which is converted into graphics or lighting events according to its amplitude. This approach has limited capabilities. If MIDI or another suitable control language were used instead of this audio line, much more information could be passed. This is particularly true if the performers are already using MIDI devices. For example, audio-visual events could correspond to actual musical events rather than audio levels, and synchronicity would be preserved - audio-visual events would be perfectly in time with the music.
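
As a sketch of the kind of mapping this would allow, the routine below reacts to individual MIDI note events rather than an overall audio level. The send_lighting_cue function is hypothetical, standing in for whatever interface a lighting desk might expose:

    # Sketch: driving lights from musical events instead of amplitude.

    def send_lighting_cue(cue, **params):
        """Hypothetical lighting-desk interface; prints for demonstration."""
        print(cue, params)

    def on_midi_message(status, data1, data2):
        if status & 0xF0 == 0x90 and data2 > 0:    # Note On with velocity > 0
            channel, note, velocity = status & 0x0F, data1, data2
            if channel == 9:                       # channel 10 carries percussion in General MIDI
                send_lighting_cue("strobe", intensity=velocity / 127)
            else:
                send_lighting_cue("wash", hue=note % 12, intensity=velocity / 127)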

Research

As part of my research, I have spoken to a few computer-based musicians to get a better understanding of the problem domain. Here is what I learned:

Methods of Production

A computer musician composes music by first creating "parts", which are musical riffs such as basslines, leads or percussion patterns. These parts are usually created by one of the following methods:

  1. Record a MIDI keyboard (or another MIDI instrument) and then edit or fine-tune the note data using a score, piano roll or event list editor (see figures 1.2 and 1.3).
  2. Input the notes "manually" using one of the three editors mentioned above. This method is slightly more tedious, but not uncommon.
  3. Use arpeggiators, which are functions often provided by MIDI keyboards. The musician plays a chord, and a melody is automatically generated. The simplest arpeggiator just cycles through the keys the musician has pressed (see the sketch after this list), though the musician can often program any pattern they wish. From what I have seen, musicians use MIDI devices with pre-defined arpeggiator patterns to do this, although a computer could be used.
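
Below is a minimal sketch of the simple cycling arpeggiator described in method 3. Here, held_notes would be maintained from incoming Note On/Off messages, and play stands in for whatever note-output routine is available:

    # Minimal arpeggiator: cycle through the held keys, one per step.

    import itertools

    def arpeggiate(held_notes, steps, play):
        """Play the held notes in a repeating cycle, one note per step."""
        cycle = itertools.cycle(sorted(held_notes))
        for _ in range(steps):
            play(next(cycle))

    # Holding a C major chord (C, E, G) for eight steps:
    arpeggiate([60, 64, 67], 8, play=print)   # 60 64 67 60 64 67 60 64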

Once these parts have been created, they are then "arranged" over time. They are copied, edited and positioned in some sort of schedule. This, apparently, is the most boring aspect of composition.
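
One way to picture such a schedule (purely illustrative; sequencers differ in how they store arrangements) is as a list of parts with start positions, so that a part is reused by reference rather than entered again:

    # Illustrative arrangement: parts placed at bar positions over time.
    arrangement = [
        (0,  "drums_main"),
        (0,  "bassline_a"),
        (8,  "lead_riff"),
        (16, "bassline_a"),   # the same part is reused, not re-entered
    ]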

Personal Preferences

The computer seemed to be the least-used component of the MIDI studio. When I asked why, I found that musicians hate using computer screens and mice. Rather, they prefer tactile devices: piano keyboards, or anything with buttons, knobs and sliders on it. They argued that these allow ideas to 'flow' more easily, whereas using the computer seemed to be the boring part of it all.

Another comment made was that many MIDI devices have functions which are completely ignored by the computer sequencer (e.g. arpeggiation). The only way of accessing this functionality is through the control panel on the device itself, and this may be another reason why the musicians tended to use the MIDI devices rather than the computer.

Central Computer Control

During the research I saw a number of MIDI setups. In general, these consisted of a computer, some MIDI devices and some analogue devices. A 16- or 24-channel mixing desk was used to mix the audio outputs into a single stereo output.

These setups combined MIDI equipment (including the computer) with analogue equipment (including the mixing desk). This means that the musician can only record a fraction of their work, because changes made to the analogue equipment cannot be recorded by the computer. Switching between saved pieces therefore requires manually reconfiguring certain devices.

If every device, including the mixing desk, were MIDI (or computer) controllable, then every aspect of the music could be recorded. Switching between saved pieces would be trivial, and the musician would, in theory, only need to interact with the computer.
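
To sketch what recording a desk movement might look like: a fader move on a MIDI-controllable desk could be captured and replayed as a standard Control Change message. Controller 7 is the MIDI channel-volume controller; the desk channel and value below are illustrative:

    # A mixer fader move expressed as a MIDI Control Change message.

    def control_change(channel, controller, value):
        """Control Change: status 0xB0 + channel, then controller and value."""
        return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

    # Bring desk channel 5 (index 4) up to roughly 75% volume:
    msg = control_change(4, 7, 96)

A sequencer could then store such messages alongside the note data, so that recalling a saved piece would restore the mix as well.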


This document composed by Adam Buckley (adambuckley@bigfoot.com), last edited on 16-May-2002.