Conclusions
Problems not addressed

Problems which have not been addressed by the previous design
include:
- An example MIDI instrument has 16 channels, and is seen
as 16 instances of a device class. The instrument would
require 16 audio outputs if signals were to be routed
transparently. However, most MIDI instruments have two
(or sometimes four) audio outputs. This impedance mismatch
means that restrictions would be imposed when routing
audio signals, and conflicts may have to be resolved
manually.
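The channel/output mismatch above could be resolved by a routing table that maps each of a device's 16 MIDI channels onto one of its physical audio outputs. A minimal sketch, assuming a simple round-robin default; all names here are illustrative, not part of the report's design:

```python
# Hypothetical sketch: mapping 16 MIDI channels onto a device's
# (typically two) physical audio outputs.

def default_output_map(num_channels=16, num_outputs=2):
    """Assign each MIDI channel to a physical output, round-robin."""
    return {ch: ch % num_outputs for ch in range(num_channels)}

def route(channel, output_map):
    """Return the physical output a channel's audio is routed to."""
    return output_map[channel]
```

A user could then override individual entries of the map to resolve routing restrictions manually, as the text suggests.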
- If a MIDI input device were routed to a MIDI output
device, the signal would be sent through a dataflow
structure. It would require processing, and possibly a
network hop, both of which introduce delays. A musician
using an input device may therefore not hear the result
in real time. A solution to this problem would be to
provide functionality allowing an input device to be
routed to an output device without using any other part
of the system.
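The direct-routing idea above can be sketched as a bypass object that forwards events from input to output with no intervening dataflow processing. Class and method names are assumptions for illustration:

```python
# Hypothetical sketch of a "direct routing" mode: input events are
# forwarded straight to the output device, bypassing the dataflow
# network and its processing/network delays.

class DirectRoute:
    def __init__(self, output_device):
        self.output = output_device

    def on_event(self, event):
        # Forward immediately; no dataflow processing, no network hop.
        self.output.send(event)

class LoggingDevice:
    """Stand-in output device that records what it receives."""
    def __init__(self):
        self.sent = []

    def send(self, event):
        self.sent.append(event)
```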
- Certain MIDI devices can use samples or custom-created
instruments. Samples may or may not be present on all
instances of a device, and may be assigned to different
note numbers, so some sort of sample management layer
could be introduced to provide transparent use of
samples. A composition would then be stored along with
all the samples it requires, the samples perhaps being
stored in a universal format converted at run-time.
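The sample-management layer described above could be sketched as a lookup table: compositions refer to samples by name, and the layer resolves each name to the note number that sample occupies on a particular device instance. The names below are illustrative assumptions:

```python
# Hypothetical sketch of a sample-management layer: sample names are
# resolved to per-device note numbers, so compositions never hard-code
# note assignments.

class SampleMap:
    def __init__(self):
        self._by_device = {}   # device_id -> {sample_name: note_number}

    def assign(self, device_id, sample_name, note):
        """Record where a sample lives on one device instance."""
        self._by_device.setdefault(device_id, {})[sample_name] = note

    def resolve(self, device_id, sample_name):
        """Return the note number for a sample, or None if absent."""
        return self._by_device.get(device_id, {}).get(sample_name)
```

A None result would signal that the sample must first be uploaded (and possibly converted from the universal format) before the composition can play on that device.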
The Auditorium Revisited
The system described in this report improves on the current
state of the art in auditoriums by presenting a single system
image. No longer are distinct audio and visual systems used -
these are simply components of the larger system. A single,
global clock would be used - created by the performers on stage.
The distinction between the jobs of the musicians, lighting
engineer, sound engineer and computer graphics controller is
blurred, giving the artists much more creative freedom.
Furthermore, impedance-mismatch problems incurred when artists
port their work from studio to auditorium would be minimised, and
may even be resolved on the fly, at perform-time.
Further Developments
The system designed in the project is open to a large number
of further developments, a few of which are outlined here:
- Icons Files could be assigned icons which appear
in the filestore view, and in the schedule view running on
a sequencer. The icons could be user-designed, or generated
automatically from the data they encapsulate. Animated
icons could be programmed to move in time with the music.
- File priorities Certain files, or certain
components of files, could request priority for use with
certain devices. Resources would then be allocated more
sensibly, a better solution than the first-come-first-served
method already in use.
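The priority-based allocation described above could be sketched with a heap of pending requests, granting the device to the highest-priority file and falling back to arrival order among equals. All names are illustrative:

```python
import heapq

# Hypothetical sketch: files request a device with a priority, and
# the device is granted to the highest-priority request rather than
# strictly first-come-first-served.

class DeviceAllocator:
    def __init__(self):
        self._requests = []   # min-heap of (-priority, arrival, file)
        self._arrival = 0

    def request(self, file_name, priority=0):
        heapq.heappush(self._requests, (-priority, self._arrival, file_name))
        self._arrival += 1

    def grant(self):
        """Grant the device to the highest-priority waiting file."""
        return heapq.heappop(self._requests)[2]
```

Equal-priority requests are still served in arrival order, so this degrades gracefully to the existing first-come-first-served behaviour when no priorities are set.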
- Key independence Currently, musical events are
described as specific notes. This can lead to
key mismatches, which are only partially resolved by using
a transpose modifier. A better solution would be to
describe notes in terms of their position within a key;
each position would be mapped onto a specific note using a
global key at perform-time.
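The key-independent notation above can be sketched as a mapping from scale degree and octave to a MIDI note number under a global key. This assumes major keys only, and the root table and function names are illustrative:

```python
# Hypothetical sketch of key-independent events: a (degree, octave)
# pair is resolved to a MIDI note number using a global key at
# perform-time. Major scale only, for illustration.

MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of degrees 1-7
KEY_ROOTS = {"C": 60, "D": 62, "E": 64, "F": 65,
             "G": 67, "A": 69, "B": 71}

def to_midi(degree, octave, key):
    """Map a 1-based scale degree (plus octave offset) to a MIDI note."""
    return KEY_ROOTS[key] + 12 * octave + MAJOR_STEPS[degree - 1]
```

Changing the global key then transposes every event consistently with no per-event modifier: degree 1 in C is note 60, while the same event in G becomes note 67.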
- Console Window A console window could
be placed on the sequencer which allows users to call
functions of any device attached to the system. User and
super-user modes could be used to restrict which
functions/devices are accessible.
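The console-window idea above could be sketched as a command dispatcher that looks up a device's functions by name and checks whether the current mode may call them. The device table, function names, and mode strings below are all assumptions for illustration:

```python
# Hypothetical sketch of the console window's dispatch: each device
# exposes named functions tagged with the mode required to call them.

DEVICES = {
    "synth1": {
        "volume": (lambda level: f"volume={level}", "user"),
        "reset":  (lambda: "reset", "super-user"),
    },
}

def call(device, function, mode, *args):
    """Invoke a device function, enforcing user/super-user modes."""
    fn, required = DEVICES[device][function]
    if required == "super-user" and mode != "super-user":
        raise PermissionError(f"{function} requires super-user mode")
    return fn(*args)
```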
- Users' own files Files could be arranged into groups
so that users and groups have private ownership of files.
For example, the drummer could own all the
percussion patterns. The super-user would have access to
all files.
- Active dataflow diagrams A view of a file's
dataflow structure on the interface could make use of
animation while the file is playing. Links could be given
graphical properties to denote activity and to signify
the sort of information flowing through them. This would
require demon processes running within the filestore
which relay status information to the sequencer.
- Schedule Optimisation The algorithm used in the
scheduler is at present first-come-first-served. A more
intelligent algorithm could be used to constantly keep
the schedule in an optimised state.
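One well-known improvement over first-come-first-served is shortest-job-first ordering, which minimises mean waiting time across pending requests. A minimal sketch of re-optimising the schedule whenever it changes, with illustrative job names and durations (not the report's actual scheduler):

```python
# Hypothetical sketch: keep pending (name, duration) requests in
# shortest-job-first order instead of first-come-first-served.

def optimise(pending):
    """Return pending jobs reordered shortest-job-first."""
    return sorted(pending, key=lambda job: job[1])

def mean_wait(jobs):
    """Mean time each job waits before starting, given a fixed order."""
    wait = total = 0
    for _, duration in jobs:
        total += wait
        wait += duration
    return total / len(jobs)
```

For jobs of durations 10, 1, and 2 arriving in that order, first-come-first-served gives a mean wait of 7 ticks, while the optimised order gives 4/3, illustrating the gain such a scheduler could deliver.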
- Scripting language At present, functions are
implemented as executable code with a known interface, so
developing new functions requires specialist knowledge. A
computationally complete scripting language along the
lines of, e.g., Microsoft Visual Basic would increase the
usability of the system enormously. Users would be able
to interact with dataflow diagrams and create new virtual
devices and event modifiers on-the-fly. Users would also
be able to create event lists as an algorithm instead of
a static list of events. This development would require
knowledge of object-oriented programming environments.
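The "event list as an algorithm" idea above can be illustrated with a generator that computes events on demand instead of storing them statically; the kind of thing a scripting language would let users write. Function and parameter names are assumptions:

```python
# Hypothetical sketch of an algorithmic event list: events are
# computed by a generator rather than stored as a static list.

def arpeggio(root, interval, count, step_ticks):
    """Yield (time, note) events for a simple ascending arpeggio."""
    for i in range(count):
        yield (i * step_ticks, root + i * interval)
```

For example, `list(arpeggio(60, 4, 3, 24))` yields `[(0, 60), (24, 64), (48, 68)]`: three events generated from four parameters rather than stored explicitly.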
- New devices New virtual devices and file types
could be created which enable the control of:
- Hard disc recordings;
- Compact disc/DAT players;
- Laser disc players, or digital video players;
- Other hardware control devices, e.g. fireworks and
moving structures.
- Editors Editors which allow the user to edit
or create audio-visual files could be created. The
advantage of custom-made editors (as opposed to existing
sequencers) is that they would be more in line with the
whole system, in the sense that any type of audio-visual
event could be sequenced in an intuitive, recognisable
manner. Custom-made editors might even allow users to
tweak audio-visual files at perform-time.
This document composed by Adam Buckley (adambuckley@bigfoot.com),
last edited on 16-May-2002.