Note: A single physical MIDI device may have up to 16 channels. The word "device", therefore, is defined as any sound or audio-visual source that the user can control individually. This means that eg. a Kurzweil K2000 16-channel MIDI keyboard can be seen as 16 separate devices, sharing the same set of properties and capabilities. The user will see 16 instances of a K2000.
The system is based around the sequencing (playing) of individual, encapsulated audio-visual script files. Files are coded in a generic event language (see 3.2.4) and are stored in an open format, such as ASCII. Files are held in the filestore and can be accessed by anybody using the system. Multiple files can be merged, or grouped together to form a single, composite file.
Although facilities must exist which allow the user to record, or otherwise create, a file, I have chosen not to consider editors in this design. Many existing sequencers have excellent recording and editing facilities, and I am assuming that these will be used. The system I am designing, therefore, must include facilities to import MIDI files created in this way.
A file consists of a list of audio-visual events piped through zero or more event modifiers (transpose etc.), to an audio-visual output device (MIDI instrument, computer graphics server). After this they may be sent through zero or more further effects processors (audio reverb, video effects unit) to a physical output device (speaker, video projector). Each block on this pipeline may have properties which the user, or an event script must be able to update in real-time. A file may be looped, that is, it repeatedly cycles through the event list until it is terminated by the user.
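As an illustrative sketch only (not part of the design itself), the event-list-through-modifiers pipeline described above could be expressed as follows. The event representation, the `transpose` modifier, and all names here are assumptions for illustration:

```python
# Sketch of the file pipeline: a list of events passes through zero or
# more modifiers before reaching an output device. A file may loop,
# cycling through the event list until terminated.

def transpose(semitones):
    """Event modifier: shift the pitch of note events by a fixed amount."""
    def modify(event):
        if event["type"] == "note_on":
            event = dict(event, pitch=event["pitch"] + semitones)
        return event
    return modify

def play(events, modifiers, output, loop=False):
    """Send each event through the modifier chain to the output device."""
    while True:
        for event in events:
            for m in modifiers:
                event = m(event)
            output(event)
        if not loop:
            break

# A minimal run: one note, transposed up an octave, captured in a list
# standing in for a real output device.
sent = []
play([{"type": "note_on", "pitch": 60}], [transpose(12)], sent.append)
```

The same structure extends naturally to the second half of the pipeline (effects processors before a physical output device), since an effects processor is just another stage with updatable properties.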
The system must also allow the routing of an input device to an output device. This is another file, just the same as the previous files, except the list of events is replaced with an input stream of events. The input stream is drawn from a specific input device and converted into generic events.
Files are Sequenced
The system must then be able to play files. A filename and a start time are supplied as parameters, and the system must initiate playback of that file at the specified time.
The system must present the user with the set of computer-controllable devices attached to the system, including MIDI output devices, sound processing devices, and even speakers if an audio routing matrix is present. The physical devices are private to the system, and the user only interacts with them via "virtual" devices which are presented by the system.
This means that the system must hold a registry of all attached devices, including a unique ID and further parameters where required. Devices must be able to be flagged as in use by a file, and therefore unavailable to other files which are waiting to be played. This registry must also hold details of what generic devices are mapped onto (see 2.1.3).
If a file waiting to be played cannot be allocated a device, either because all instances of the device are in use or because the device is not present on the network, then the system must suggest alternatives based on what is available.
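A minimal sketch of such a registry, purely for illustration (the class and method names are assumptions, not part of the design):

```python
# Registry of attached devices: each entry has a unique ID, a device
# class, and an in-use flag. When allocation fails, the registry can
# suggest free alternatives.

class DeviceRegistry:
    def __init__(self):
        self.devices = {}  # unique ID -> {"class": ..., "in_use": ...}

    def register(self, dev_id, dev_class):
        self.devices[dev_id] = {"class": dev_class, "in_use": False}

    def allocate(self, dev_class):
        """Flag and return a free device of the given class, or None."""
        for dev_id, info in self.devices.items():
            if info["class"] == dev_class and not info["in_use"]:
                info["in_use"] = True
                return dev_id
        return None

    def suggest_alternatives(self):
        """Free devices to offer when the requested class is unavailable."""
        return [d for d, info in self.devices.items() if not info["in_use"]]
```

In practice the registry would also record the generic-device mappings mentioned above, alongside each entry's further parameters.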
The system allows a user to specify a filename and a start time. At perform-time, the system must then traverse a central timing schedule and ensure that files are initiated at the correct time. If the user wishes to play the file immediately, then the system must be able to snap the file to the correct position in the timing schedule with respect to eg. the nearest 8th bar. This ensures that files initiated spontaneously during perform-time are played in time with the global timing schedule.
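The snap-to-schedule step above amounts to rounding a requested start position up to the next boundary. A rough sketch, assuming time is measured in beats and a 4-beat bar (both assumptions for illustration):

```python
# Snap a spontaneous "play now" request to the next 8-bar boundary of
# the global timing schedule, so the file starts in time.

def snap_to_boundary(now_beats, beats_per_bar=4, bar_multiple=8):
    """Return the next schedule position on a bar_multiple-bar boundary."""
    period = beats_per_bar * bar_multiple      # beats per 8 bars = 32
    periods_elapsed, remainder = divmod(now_beats, period)
    if remainder == 0:
        return now_beats                       # already on a boundary
    return (periods_elapsed + 1) * period      # round up to next boundary
```

A request at beat 33, for example, would be deferred to beat 64, the start of the next 8-bar cycle.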
As well as initiating files in real-time, the user may wish to update certain parameters of files in real-time. The system must therefore allow messages to be sent from a sequencer which will update parameters of a file. In practice, a user would update the parameters of blocks on the file's dataflow view, add new blocks, or remove blocks (see 3.8).
The user may wish to create a new file to route an input device to an output device. This must be done in real-time, and the system must provide facilities for the user to create files on-the-fly like this.
The system must contain a file server, which stores audio-visual files in a directory hierarchy. The file server must present these files upon request.
Generic Event Language
A conventional MIDI file consists of a list of events, each of which specifies a channel, the type of event, and some parameters. This is device-specific, and when the file is played on a different MIDI setup, the device at the same address may be different or not present.
In order to make files and compositions portable, the file must be specified in a device-independent language - a generic event language (GEL). This means that event messages are of the form message_name(p1, ..., pn) and are mapped onto device-specific (eg. MIDI) messages at run-time.
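To illustrate the run-time mapping, a generic note event might be translated into raw MIDI bytes as below. The GEL message name is an assumption; the byte layout follows the standard MIDI note-on channel voice message (status byte 0x9n, then pitch and velocity):

```python
# Map a generic event of the form message_name(p1, ..., pn) onto a
# device-specific MIDI message at run-time.

MIDI_NOTE_ON = 0x90  # note-on status byte, low nibble carries the channel

def gel_to_midi(name, params, channel=0):
    """Translate one generic event into raw MIDI bytes."""
    if name == "note_on":
        pitch, velocity = params
        return bytes([MIDI_NOTE_ON | channel, pitch, velocity])
    raise ValueError("no MIDI mapping for " + name)
```

A full implementation would carry one such mapping table per device class, so the same GEL file can drive different hardware.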
Files output events to devices. However, any device may or may not be present in the current setup. Therefore specifying a device as a physical address would make it impossible to move files from setup to setup. The devices need to be specified in a general form and matched at run-time. One solution would be to force the user to select a device each time the file plays. This solution would be very tedious, and so alternatively, the file could:
Solutions 1 and 2 seem most desirable, because all device mapping would be done transparently. Ultimately, however, they are impractical: to be truly universal they would require extensive standardisation, and the user would have to record technical details of every piece of their equipment.
Also, musicians tend to have very strong feelings about the particular devices (the kit) they use. In this respect, I believe musicians would prefer to specify and control exactly which device they are controlling. This would also be an organic way of minimising resource clashes, since musicians tend to know their kit very well. Solution 3 therefore seems the most appropriate choice.
The system must therefore record the preferred device as a device class name, eg. "kurzweil_k2000" or "alesis_sr16" for devices and "transpose" or "velocity_fix" for event modifiers (see 3.8). A device class is, in reality, a manufacturer name and a model name. It is used to specify a certain piece of hardware, an instance of which may or may not be present. Event modifiers are software functions which may or may not be present on the host system; they, too, need to be specified and matched onto an actual software function.
There is no mechanism to ensure that the same device has exactly the same name on a different site, but a convention of "manufacturer_model" may be put forward. If a different name is in use for the same device class, the user will be prompted automatically when there is no literal match; they can then spot and select the alternative device name, which would be recorded as an alternative device choice.
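The matching step above can be sketched as a simple lookup over the preferred class and its recorded alternatives, falling back to a user prompt when nothing matches. The function name and data shapes are assumptions for illustration:

```python
# Match a file's preferred device class against the classes available
# at this site, trying recorded alternative names before giving up.

def match_device_class(preferred, alternatives, available):
    """Return the first class name present at this site, else None."""
    for candidate in [preferred] + list(alternatives):
        if candidate in available:
            return candidate
    return None  # no literal match: prompt the user to choose manually
```

A `None` result corresponds to the prompt described above; the user's choice would then be appended to the file's alternatives list for future runs.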
The user may also wish to include some device-specific initialisations. This may be used to put the device in a certain mode, etc., before playback. These device-specific events must be associated with the list of device choices in the file header.
Also, as mentioned in the user requirements, an output device may be specified as a generic device, which remains present and consistent from site to site (see section 2.1.3). The system must therefore:
Provide a set of universal generic devices, which are present in all installations of the final system. This allows the user to specify a generic device, eg. "main speaker", "sub bass speaker", or "central graphics viewport". Every installation of the system will recognise these generic devices, and map them onto real devices according to the system configuration. The names of these generic devices must be standardised;
Allow the user to define their own generic devices. These may be defined in terms of existing generic devices, eg. "rear speakers" consists of the existing generic devices, "rear left speaker" and "rear right speaker", in which case the mapping can still be done automatically. Alternatively, the user may define new generic devices which do not yet exist, eg. "front right strobe". This would require manual mapping when porting to a different installation.
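The two requirements above can be sketched together as a small resolver: a table mapping standard generic devices to real devices, plus user-defined composites that expand recursively. All table contents and names here are illustrative assumptions:

```python
# Resolve generic devices to real device IDs. Composites ("rear
# speakers") expand into their constituent generic devices; unknown
# generics resolve to an empty list, signalling manual mapping.

GENERIC_MAP = {
    "main speaker": ["speaker_01"],
    "rear left speaker": ["speaker_02"],
    "rear right speaker": ["speaker_03"],
}

COMPOSITES = {
    "rear speakers": ["rear left speaker", "rear right speaker"],
}

def resolve(generic):
    """Expand composites recursively, then map to real device IDs."""
    if generic in COMPOSITES:
        real = []
        for part in COMPOSITES[generic]:
            real.extend(resolve(part))
        return real
    return GENERIC_MAP.get(generic, [])  # [] => needs manual mapping
```

Here "rear speakers" still resolves automatically because its parts are known, while a new generic such as "front right strobe" yields an empty result until mapped by hand at the new installation.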
The user interacts with the system by means of a computer with a graphical user interface. The system must be able to recognise any number of such computers. This means that a network must be present and the system, therefore, is distributed over a number of computers on the network. The timing schedule is accessed by multiple users, and must be kept up-to-date at all times.
Use of a network implies the use of a location-transparency layer. This means that a network layer must be present which presents files and devices to applications, regardless of their actual location over the network. A device may be attached to any computer on the network.
Users interact with the system through interfaces. The most appropriate way of doing this would be through a graphical user interface, where the user uses a mouse and standard interface elements such as windows, buttons etc. This interface must capture all of the functionality already mentioned. There may be any number of interfaces distributed over computers on the network.
Specifically, the interface must contain the following components:
Figure 3.1 - An example file dataflow diagram featuring a chord pattern and corresponding graphics.
The generic event language (GEL) used by the system must be kept in an open format, ie. ASCII. The system must allow users to use custom-written modifiers when designing a file dataflow diagram.
The system must be able to output conventional MIDI data, such as MIDI clock etc., so that legacy sequencers can be attached to the system. These sequencers would be autonomous and independent of the larger system, connected only by a clock pulse.
The system must be able to export a composition as a standard MIDI file.
This document composed by Adam Buckley (email@example.com), last edited on 16-May-2002.