In the past, electronic music systems have been system-centred: the system is designed to be compatible with the music equipment rather than with the user. I wish to move away from this, and design a system with the goal of making the user's life easier. This requirements analysis is therefore concerned primarily with the user's needs.
The user requires a system where much of the internal complexity is hidden, allowing more creative freedom. In other words, the user is able to produce music and manage their work without needing much technical knowledge of the computer and the attached equipment. On the other hand, musicians with a better technical understanding should be allowed to take advantage of the full potential of the system.
The complexity that musicians encounter when using conventional sequencers is largely a matter of remembering which devices are located on which MIDI ports and channels. When composing a piece of music using a conventional MIDI sequencer, a "track" (e.g. a bassline, or a percussion pattern) is sent to a specific channel on a specific device. It can become complicated remembering which device is where, and if a large piece of music is being worked on, two tracks may try to drive the same device and channel, causing a conflict. A user requirement, therefore, is that the system manages the devices, channels, ports etc. and presents the user with just the devices, in the form of device names. The system must hold details of device usage status, and if there is a device conflict, the system must handle it and produce an alternative solution.
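The device management described above could be sketched as follows. This is an illustrative sketch only, not the system's actual design: the class and device names are assumptions, and conflict handling here is a simple first-free-channel policy.

```python
# Hypothetical sketch: map user-visible device names to (MIDI port,
# channel) pairs, track usage, and resolve conflicts automatically.

class DeviceManager:
    """Hides ports and channels; the user deals only in device names."""

    def __init__(self, devices):
        # devices: name -> (port, list of channels the device responds on)
        self.devices = devices
        self.in_use = {}  # (port, channel) -> track name

    def assign(self, track, device_name):
        """Reserve a free channel on the named device for a track.
        If a channel is already taken, fall back to the next free
        channel on the same device - the 'alternative solution'."""
        port, channels = self.devices[device_name]
        for channel in channels:
            if (port, channel) not in self.in_use:
                self.in_use[(port, channel)] = track
                return (port, channel)
        raise RuntimeError(f"no free channel on {device_name}")

mgr = DeviceManager({"JV-1080": (0, [1, 2, 3]), "TR-909": (1, [10])})
print(mgr.assign("bassline", "JV-1080"))   # (0, 1)
print(mgr.assign("pad", "JV-1080"))        # (0, 2) - conflict avoided
print(mgr.assign("percussion", "TR-909"))  # (1, 10)
```

The key point is that the track is bound to a device name, never to a raw port/channel pair, so the conflict resolution stays inside the system.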
It may be the case that the musician has a "MIDI-controlled audio routing matrix" (Rumsey, 1994, p. 102). A conventional audio mixing desk takes, e.g., 16 inputs and produces 2 outputs. An audio routing matrix, however, takes, e.g., 24 inputs and routes them to one or more of 24 outputs. If one of these devices is present, then more of the system may be put under computer control.
In user requirements terms, the system must manage audio (and video) routing, from, e.g., the output of a MIDI instrument, through audio effects processors, and ultimately to an individual amplifier. Again, the user must only be presented with the devices - the system takes care of all the routing. The user would not see the audio routing matrix; rather, they would use the interface to connect the output of one device to the input of another (see fig 3.1).
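One way this hiding of the routing matrix might look in code is sketched below. All names (Router, crosspoint numbering, the example devices) are illustrative assumptions, not part of this specification.

```python
# Hypothetical sketch: the user connects named devices; the system
# translates the connection into matrix crosspoints behind the scenes.

class RoutingMatrix:
    """A bare model of an n-in, n-out audio routing matrix."""
    def __init__(self, n_inputs, n_outputs):
        self.n_inputs, self.n_outputs = n_inputs, n_outputs
        self.crosspoints = set()  # (input, output) pairs currently routed

    def connect(self, src, dst):
        self.crosspoints.add((src, dst))

class Router:
    """Presents device names to the user; hides the matrix wiring."""
    def __init__(self, matrix, inputs, outputs):
        self.matrix = matrix
        self.inputs = inputs    # device name -> matrix input number
        self.outputs = outputs  # device name -> matrix output number

    def connect(self, from_device, to_device):
        self.matrix.connect(self.inputs[from_device],
                            self.outputs[to_device])

matrix = RoutingMatrix(24, 24)
router = Router(matrix, {"JV-1080 out": 3}, {"reverb in": 7, "amp L": 0})
router.connect("JV-1080 out", "reverb in")
print(matrix.crosspoints)  # {(3, 7)}
```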
If the user wishes to sequence live, then the user must have access to a library of musical (and audio-visual) parts. The user must be able to easily recognise and retrieve parts.
One of these parts, or files, might represent a bassline, a percussion pattern, a lights/graphics sequence, hardware control signals or any mixture thereof. The user may want to compose a part consisting of multiple media, e.g. a bassline with a corresponding graphics or lights sequence. The user may also want to merge files, e.g. a group of percussion patterns, to form a single, composite percussion pattern.
Files can be played at any time - either immediately, or positioned in a timing schedule so that they will be played in the near future. A user may also want to update a parameter of a file in real time, e.g. change the output instrument, transpose the pitch, fix the volume, offset the time etc.
A file may therefore be thought of as a list of events which are piped into an output instrument via certain modifiers (transpose etc.). Furthermore, if an audio routing matrix is present (2.1.1), then the file can be routed from the output instrument through audio effects processors and to an individual speaker.
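The "list of events piped through modifiers" model could be sketched as below. This is a minimal illustration under assumed names: the event fields and modifier functions are invented for the example, not taken from the specification.

```python
# Hypothetical sketch: a file is a list of events; modifiers (transpose,
# fix volume, ...) are applied to each event on its way to the output.

def transpose(semitones):
    def modify(event):
        return {**event, "pitch": event["pitch"] + semitones}
    return modify

def fix_volume(level):
    def modify(event):
        return {**event, "volume": level}
    return modify

def play(events, modifiers, output):
    """Pipe each event through the modifier chain, then to the output."""
    for event in events:
        for modify in modifiers:
            event = modify(event)
        output.append(event)

bassline = [{"time": 0,   "pitch": 36, "volume": 100},
            {"time": 480, "pitch": 43, "volume": 90}]
out = []
play(bassline, [transpose(12), fix_volume(110)], out)
print(out[0])  # {'time': 0, 'pitch': 48, 'volume': 110}
```

Because modifiers are just a chain on the event stream, real-time updates (2.2) amount to swapping one modifier for another while the file plays.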
The user may also want to route an input device to an output device. This will be another assignment of events to an output, except here, the events are coming from an input device as opposed to a file. The description of this routing is in fact another file, often created on-the-fly. The system must therefore include facilities to create and save files in real-time.
A user must be able to play a composition made on their own equipment setup on another, different setup. The host system must attempt to make the composition sound as close as possible to the original. This would mean that a user specifies a device to output to. When porting, the host system must map the original device specification onto the local equivalent, or offer a near-match if there is no direct equivalent.
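The mapping of an original device specification onto a local equivalent might be sketched as follows. The similarity scheme here (shared category tags) is purely an assumption made for illustration; the real system would need a richer device description.

```python
# Hypothetical sketch: find the host's exact match for a composition's
# device, or the nearest match by shared category tags.

def map_device(wanted, local_devices):
    """Return the local device matching `wanted` exactly, else the
    local device sharing the most tags (the 'near match').

    wanted: {"name": str, "tags": list of category strings}
    local_devices: name -> set of category tags
    """
    if wanted["name"] in local_devices:
        return wanted["name"]
    def score(name):
        return len(set(wanted["tags"]) & local_devices[name])
    return max(local_devices, key=score)

local = {"TB-303": {"bass", "synth"}, "D-50": {"synth", "pad"}}
print(map_device({"name": "TB-303", "tags": ["bass"]}, local))
# 'TB-303' (exact match)
print(map_device({"name": "Minimoog", "tags": ["bass", "synth"]}, local))
# 'TB-303' (near match: shares more tags than the D-50)
```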
Another user requirement is that the user may require a set of consistent devices, regardless of the setup they are using. To illustrate, imagine that a musician is working on a piece of music that makes use of quadraphonic sound. In the studio, however, the musician only has a pair of stereo headphones. In this case the user must be able to assign musical files to one (or more) of four generic speakers. The generic speakers appear consistent to the musician at all times, and are mapped onto the real speakers of a host setup at perform-time.
Another use of generic devices would be a graphics window. That is, the user is composing graphics which will eventually be projected onto a number of screens, but at design-time the user only has a single monitor. The user sends graphics to generic windows which are merged into one screen at design-time, but sent to different screens at perform-time.
A user requirement is that the system must supply a consistent set of default generic devices, and a naming convention must be arrived at. Users may also create their own generic devices, but these would require manual configuration on a different system.
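The generic-device idea could be sketched as a perform-time binding step. The default names ("speaker-1" to "speaker-4") and the round-robin fallback for too few real devices are assumptions for illustration, not the convention the report calls for.

```python
# Hypothetical sketch: compositions target generic devices, which are
# bound to real devices at perform-time on whatever setup is present.

DEFAULT_GENERICS = [f"speaker-{i}" for i in range(1, 5)]

def bind(generics, real_devices):
    """Map each generic device onto a real device at perform-time.
    With fewer real devices than generics (e.g. stereo headphones for
    a quadraphonic piece), several generics share one real output."""
    return {g: real_devices[i % len(real_devices)]
            for i, g in enumerate(generics)}

# Studio: four generic speakers collapse onto a stereo pair.
print(bind(DEFAULT_GENERICS, ["phones-L", "phones-R"]))
# {'speaker-1': 'phones-L', 'speaker-2': 'phones-R',
#  'speaker-3': 'phones-L', 'speaker-4': 'phones-R'}
```

At a venue with four real speakers the same composition would bind one-to-one, with no change to the files themselves.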
Musicians must be able to play together, in time. The musicians who are playing may be using sequencers, so the system must therefore allow a group of users to have access to the system simultaneously. On the other hand, a user may be using a MIDI instrument and so the system must allow an input device to be routed to an output device.
Users interact with the system through an interface. Specifically, the interface must allow the user to:
As I mentioned in the introduction, it would be a good idea to maintain computer control over every device, and force all interaction through the computer.
However, as I found out during my research, musicians hate using computer screens and mice, and prefer to use tactile control devices. Fortunately, many of these control devices generate MIDI data, so all interaction may still be passed through the computer, which maintains central control and may record the data. A further user requirement could be that the user can easily assign any input control device (sliders, knobs etc.) to a parameter of any file (e.g. volume, pan, a filter). The user must be able to see what is connected to what in an intuitive manner.
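Assigning a tactile controller to a file parameter might look like the sketch below. The use of MIDI control change (channel, controller number) pairs as the binding key reflects how such controllers typically transmit; the class and parameter names are assumptions for illustration.

```python
# Hypothetical sketch: bind incoming MIDI control changes to file
# parameters, and report the bindings for the interface.

class ControlMap:
    def __init__(self):
        self.bindings = {}  # (channel, cc_number) -> (file name, parameter)

    def assign(self, channel, cc, target_file, parameter):
        """E.g. bind a slider sending CC 7 to a file's volume."""
        self.bindings[(channel, cc)] = (target_file, parameter)

    def handle(self, channel, cc, value, files):
        """Route an incoming control change to the bound parameter."""
        if (channel, cc) in self.bindings:
            name, parameter = self.bindings[(channel, cc)]
            files[name][parameter] = value

    def describe(self):
        """'What is connected to what', for display in the interface."""
        return [f"ch{ch} cc{cc} -> {f}.{p}"
                for (ch, cc), (f, p) in self.bindings.items()]

files = {"bassline": {"volume": 100}}
cmap = ControlMap()
cmap.assign(1, 7, "bassline", "volume")  # a hardware slider on CC 7
cmap.handle(1, 7, 64, files)             # slider moved
print(files["bassline"]["volume"])       # 64
print(cmap.describe())                   # ['ch1 cc7 -> bassline.volume']
```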
Through considering the problem domain, I have realised that two further requirements would be of benefit in the long run:
New technology and new ideas are emerging all the time. If this system were a closed system - that is, a "black box" system where users only interact with predefined interfaces and have no access to the internal workings - then it would rapidly become outdated. I would like, therefore, to keep the system "open".
In practice, this would mean that the files are written in a recognisable format, e.g. ASCII. Other ways in which the system could be kept open would be to allow users to use their own editors or sequencers, and to import/export MIDI from the larger system.
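To make the openness concrete, an ASCII event file and its parser might look like the sketch below. The line format (time, pitch, volume, whitespace-separated, with '#' comments) is entirely an assumption; the report only requires that the format be recognisable text.

```python
# Hypothetical sketch of an open, ASCII event-file format: any editor
# can read and write it, keeping the system's files user-accessible.

def parse(text):
    """Parse 'time pitch volume' lines; '#' starts a comment."""
    events = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()
        if not line:
            continue
        time, pitch, volume = (int(x) for x in line.split())
        events.append({"time": time, "pitch": pitch, "volume": volume})
    return events

sample = """# bassline, one bar
0    36  100
480  43  90
"""
print(parse(sample)[1])  # {'time': 480, 'pitch': 43, 'volume': 90}
```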
In keeping with the philosophy of an open system, the system should in some ways be backward-compatible. In practice, this means that musicians who are used to using a conventional sequencer should not be forbidden to connect to the larger system. This would be useful in situations where musicians are collaborating with legacy musicians. A global tempo is needed to keep both systems in time, and visual events may be produced from the MIDI data the legacy musician is generating.
The user should also be allowed to import or export as a standard MIDI file.
Figure 2.1 - Requirements specification
This document composed by Adam Buckley (firstname.lastname@example.org), last edited on 16-May-2002.