Abstract

In general, composers are used to writing music.  However, in today’s world, a large number of exciting multimedia output systems exist, such as two- or three-dimensional graphics, computer-controlled lighting effects, MIDI and digital audio synthesisers.  Composers should be able, if they wish, to make use of this new technology as new types of media for their compositions.

However, the practicalities of controlling such technology are prohibitively difficult, and would require the composer to have the knowledge and skills of a computer scientist.  This project addresses this issue by building a system, using Java and CORBA, which gives the composer simplified control of such technology.  Along the way, the project also investigates the multimedia and networking capabilities of Java, which are presented in ‘tutorial’ form.

Cover illustration by Guy Buckley

Contents

Context

Problem Domain ......... 4

Candidate Technologies

Object Orientation ......... 7

Java ......... 10

CORBA ......... 14

MIDI ......... 17

Inferno ......... 18

RM-ODP ......... 19

DCE ......... 19

DCOM ......... 20

Existing Work

Supporting Real-Time Multimedia Applications with Distributed Object Controlled Networks [Oliver, 1998] ......... 21

A Distributed Object Platform Infrastructure for Multimedia Applications [Coulson, 1998] ......... 22

MAESTRO ......... 23

Where the Existing Work Fails (The "Gap") ......... 23

Objectives

Project Specialisation ......... 26

Build a demonstrator ......... 28

Design ......... 30

Implementation

Learning Java 1.2 ......... 33

Graphics 2D ......... 37

JavaSound ......... 46

Sockets ......... 50

Example: ThreeBlindMice ......... 57

Remote Method Invocation ......... 69

Java Native Interface ......... 78

Native MIDI ......... 83

Example: Minim ......... 88

Java IDL ......... 102

Conclusion ......... 115

Project Appraisal ......... 115

The Practical Stage ......... 119

Further Work ......... 119

References ......... 124

Context

Problem Domain

Distributed Multimedia Output Devices

In room T207 at the University of York, UK, we have a small network of computers.  These include a PC running Windows 95 and a PC running Windows NT.  In addition we have two Indigos, two Indies and two O2s - all manufactured by Silicon Graphics Inc (SGI).  All of these machines incorporate a number of multimedia output devices, which include:

·        MIDI Synthesis:  All machines are capable of high quality wavetable-based MIDI synthesis.

·        Digital Audio:  The PCs have a stereo audio output and the SGI machines have a quadraphonic audio output.

·        Video:  All machines possess a monitor and are capable of producing graphics which may consist of text, vector or bitmap-based artwork.  Furthermore, the university owns equipment which allows the video output of a machine to be projected onto large screens.

In addition to the above, we have devices which are separate from, but controlled by, the machines.  These include:

·        MIDI Expander Modules:  We have a Roland SC-7 Sound Module which is a General MIDI expander, and a Kurzweil K2000R which is a general purpose sampler, effects unit and MIDI synthesiser.

·        Stage Lights:  We have two sets of stage lights - each consisting of four coloured lights.  They are connected to a controller, which is in turn connected to a computer via a MIDI cable.  The brightness of each of the eight lights can be individually controlled.

As well as keyboards and mice, we have various specialised input devices:

·        Audio Input:  each machine has a stereo line input.  The SGI machines also have a digital audio input.

·        MIDI Creator:  the MIDI Creator is a unit which takes a set of scalar voltage inputs and outputs them as MIDI messages, which a computer may process.  Many input devices exist, including distance sensors, floor mats and triggers.  Furthermore, novel input devices have been ‘invented’ by members of the department, including a dataglove and a handheld piece of foam which outputs two control voltages, one relating to each end of the foam.

The computers and associated input/output devices hold a great deal of potential for realising multimedia compositions or performances.

To aid in understanding the nature of such work, two brief examples follow:

·        An electroacoustic piece, where abstract entities are realised and interact using the media of sound, graphics and lighting effects.

·        Taking a piece of existing ‘conventional’ music (e.g. jazz/blues/dance) and constructing a set of graphical/lighting elements which accompany each part of the original piece.

The Problem of Controlling the Hardware

The hardware exists to support any of the above compositions; actually controlling the hardware, however, is very difficult.

For example, a MIDI sequencer offers the composer all the tools necessary to create music, and is the primary candidate for controlling the musical aspect of a multimedia composition.  In this case, the composer could even use the MIDI sequencer to control the lights - the lights controller is, after all, just another MIDI device.  However, a MIDI sequencer would model the lights as a musical instrument and the composer may find this mapping unnatural and difficult to use.

Furthermore, a MIDI sequencer would have very limited use in controlling associated video elements.  Although graphics software responding to incoming MIDI messages could be written, only the simplest instructions can be encoded as MIDI messages, and the MIDI-to-graphics mappings would be prohibitively unnatural and difficult to use.

Using a MIDI sequencer would mean that the composer would have to resort to incorporating another, more suitable, system in order to create the video aspects of the composition. The composer would then have to decompose the overall score into parts suitable for specific multimedia types.  The parts would be implemented using distinct, autonomous and heterogeneous systems.

This would have the adverse effect of fragmenting a single, unified score into several parts, each of which would be encoded using a language suitable for the corresponding multimedia type.  Each of these ‘sub-scores’ would be incompatible with other media types, difficult to create, even more difficult to change and highly complicated.  Without centralised control, synchronisation becomes an issue, and it becomes apparent that the role of the composer is transformed from that of an artist to that of a computer programmer.

Furthermore, the problem would become many times more difficult should the composer want real-time control of the output objects, perhaps using various input devices.

The purpose of this project, therefore, is to investigate a more enabling technology which would give the composer centralised, simplified and unified control of such hardware.

Candidate Technologies

There are many different technologies which could potentially solve the problem of giving a composer control of distributed multimedia objects.  The goal of this section is to investigate distributed communication architectures; it does not investigate multimedia technologies.

Object Orientation

Object orientated technologies are well-suited to the task of realising musical or multimedia-based compositions because of their ability to abstract and because they support a more natural model of a musical situation.

A More Natural Model

For this example, imagine that a composer works with an orchestra.  The orchestra is composed of a number of instruments (and players).  Each instrument has certain properties and performs a number of actions.  For example, some of the properties of a piano are:

·        The type of piano: is it an upright piano?  A grand piano?

·        Status of the dampers: what is the state of the foot-pedals?

And some of the actions a piano supports are:

·        Play a note: with pitch, loudness and duration parameters

·        Set a foot pedal: a certain foot pedal may be set on or off

This can be modelled beautifully using object orientated technology.  A piano can be modelled using a piano ‘object’, which has a playNote(pitch, loudness, duration) method and a setFootPedal(pedalNumber, state) method.  The object would also have methods which return information about its properties, such as getFootPedalStatus(pedalNumber) and getPianoType().  Any of the instruments found in an orchestra may be modelled in this way.
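The piano described above can be sketched in Java as follows.  The method names follow the text; the sound-producing internals are stubbed out with print statements, since a real implementation would drive a synthesiser:

```java
// A simple object orientated model of a piano, following the methods
// described in the text. Sound production is stubbed out -- a real
// implementation might drive a MIDI synthesiser or play back samples.
public class Piano {
    private final String pianoType;                  // e.g. "upright" or "grand"
    private final boolean[] footPedals = new boolean[3];

    public Piano(String pianoType) {
        this.pianoType = pianoType;
    }

    // Play a note: with pitch, loudness and duration parameters.
    public void playNote(int pitch, int loudness, int durationMs) {
        System.out.println("note " + pitch + ", loudness " + loudness
                           + ", for " + durationMs + "ms");
    }

    // Set a foot pedal: a certain foot pedal may be set on or off.
    public void setFootPedal(int pedalNumber, boolean state) {
        footPedals[pedalNumber] = state;
    }

    // Methods returning information about the piano's properties.
    public boolean getFootPedalStatus(int pedalNumber) {
        return footPedals[pedalNumber];
    }

    public String getPianoType() {
        return pianoType;
    }

    public static void main(String[] args) {
        Piano piano = new Piano("grand");
        piano.setFootPedal(0, true);
        piano.playNote(60, 100, 500);   // middle C, loud, half a second
    }
}
```

The composer's code deals only with these method calls; whether playNote() drives a sampler or a physical instrument is hidden inside the class.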

In object orientated theory, every object is an instance of an object class, which could be considered to be an object’s template or mould.  Inheritance is the mechanism by which an object class inherits and extends the properties and methods of another, existing class.  The new class is called the subclass and the existing class is called the superclass.

The inheritance mechanism is very helpful when it comes to modelling a musical environment.  For example, in our orchestra, we might create a drum class, which supports the action of being hit.  Accordingly, the drum class defines a method hit(hardness).  A kettleDrum class is a subclass of the drum class and so has a hit method; so too does a snareDrum.  A snareDrum, however, also adds a method of its own - brush(duration) - which represents using a brush to play the drum for a certain length of time.

Using inheritance means that we only define the mechanics of our hit method once - in the drum class.  The snareDrum class would only need to define the brush method, and the kettleDrum would require no further work at all[1].  Inheritance means that code need not be duplicated - duplication could result in inconsistent code.  Inheritance also enforces an “is a type of” relationship between classes, which adds structure and logic to our model of the orchestra.

Furthermore, object orientated theory uses polymorphism, which is the ability to substitute a subclass in the place of a superclass.  In our orchestra, this means that wherever we find a drum, we could replace it with a snareDrum or a kettleDrum.  Because both are subclasses of drum, either can be used in place of a drum.
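The drum hierarchy and its polymorphic use can be sketched in Java like so (the returned text stands in for actual sound production):

```java
// Inheritance: Drum defines hit() once; KettleDrum and SnareDrum inherit it.
class Drum {
    public String hit(int hardness) {
        return "drum hit, hardness " + hardness;
    }
}

class KettleDrum extends Drum {
    // Nothing to add: hit() is inherited unchanged.
}

class SnareDrum extends Drum {
    // SnareDrum adds a method of its own.
    public String brush(int durationMs) {
        return "snare brushed for " + durationMs + "ms";
    }
}

public class Orchestra {
    public static void main(String[] args) {
        // Polymorphism: a SnareDrum or KettleDrum may be used
        // anywhere a Drum is expected.
        Drum[] drums = { new KettleDrum(), new SnareDrum() };
        for (Drum d : drums) {
            System.out.println(d.hit(5));
        }
        System.out.println(new SnareDrum().brush(2000));
    }
}
```

Note that the loop treats both drums identically through the Drum superclass, which is exactly the substitution the text describes.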

Object Orientated techniques are also good for modelling electronic music systems.  These systems often generate sound by passing source signals through various modifiers, ultimately to an audio output.  This ‘pipeline’ view of the system is very suitable for object orientated modelling techniques.  Software objects provide a natural model of the signal generators and signal modifiers, both of which have very clearly defined interfaces.  The signal flow may be modelled using one of many inter-object communication techniques, such as ‘streams’ or ‘sockets’.

Abstraction

Our snareDrum object is an abstracted version of a real snare drum - i.e. the principle characteristics of the drum have been identified and combined to form a ‘virtual’ snare drum.

We are able to choose how detailed our abstraction is, and, in our case, we have chosen a very simple model - a snareDrum can only be hit or brushed.  A more detailed abstraction could model the tightness of the skin or whereabouts on the skin the drum is hit. Using abstraction means that we can make our model as simple or as complicated as we like.

However, when it comes to actually implementing our snareDrum object, abstraction becomes even more valuable.  The interface of an object or a class is the set of actions and properties it implements.  In a programming language, these are represented as method calls and/or member variables.

Providing our snareDrum class adheres to our predetermined interface, the internal workings of the class may be implemented using any technique.  For example, the snareDrum may be implemented by triggering the snare drum sound of a MIDI-controlled drum machine; or by a sophisticated physical modelling algorithm which produces an audio output; or by playing back a digital recording of a snare drum.  It could even be implemented using a real snare drum with a drumstick held by a robotic arm!

Nevertheless, our snareDrum object makes a snare drum sound when prompted.  This hiding of internal detail is incredibly helpful in reducing the overall complexity of a system.  The composer need not worry about initialising the snareDrum object nor passing complicated arguments to a function.  The composer only sees what he/she is interested in - the hit or the brush method - i.e. the snareDrum’s interface.
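This separation of interface from implementation can be made explicit in Java.  In the sketch below the interface is fixed while two stubbed implementations (the class names are invented for illustration) are freely interchangeable:

```java
// The interface is all the composer ever sees.
interface SnareDrum {
    String hit(int hardness);
    String brush(int durationMs);
}

// One possible implementation: trigger a MIDI drum machine (stubbed).
class MidiSnareDrum implements SnareDrum {
    public String hit(int hardness)     { return "MIDI note-on, velocity " + hardness; }
    public String brush(int durationMs) { return "MIDI brush patch for " + durationMs + "ms"; }
}

// Another: play back a digital recording (stubbed).
class SampledSnareDrum implements SnareDrum {
    public String hit(int hardness)     { return "playing snare.wav at level " + hardness; }
    public String brush(int durationMs) { return "playing brush.wav for " + durationMs + "ms"; }
}

public class AbstractionDemo {
    public static void main(String[] args) {
        // The calling code is identical whichever implementation is used.
        SnareDrum drum = new MidiSnareDrum();
        System.out.println(drum.hit(90));
        drum = new SampledSnareDrum();
        System.out.println(drum.hit(90));
    }
}
```

Swapping MidiSnareDrum for SampledSnareDrum requires no change to the composer's code - only the line that constructs the object differs.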

In short, abstraction as provided by object orientation would be helpful in developing multimedia compositions because it unifies heterogeneous musical/multimedia objects and it hides the complexity of the system from the composer.

Java

Java is a modern object-orientated, platform-independent programming language developed by Sun Microsystems.  "Compilers and run-time systems for virtually any hardware platform and operating system are available" [Vogel, 1998:p2].  Java is particularly useful when writing programs for the Internet - many web browsers contain Java interpreters, and small Java programs ('applets') are downloaded and executed transparently.  Java's Remote Method Invocation (RMI) enables the invocation of methods on remote Java objects, enabling distributed computing applications.  RMI is a proprietary standard, specific to Java.
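As a sketch of the RMI style: a remote object is described by an interface extending java.rmi.Remote, and clients invoke its methods as if the object were local.  For brevity this example calls the implementation directly; a real deployment would extend UnicastRemoteObject and register the object with an RMI registry.  The interface and class names here are invented for illustration:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// The remote interface: every method may fail with a RemoteException,
// because in real use the call travels over the network.
interface RemoteInstrument extends Remote {
    String playNote(int pitch) throws RemoteException;
}

// A server-side implementation. In a real deployment this would extend
// UnicastRemoteObject and be bound in an RMI registry; here it is only
// called locally to show the calling convention.
class RemoteInstrumentImpl implements RemoteInstrument {
    public String playNote(int pitch) {
        return "played note " + pitch;
    }
}

public class RmiSketch {
    public static void main(String[] args) throws RemoteException {
        RemoteInstrument instrument = new RemoteInstrumentImpl();
        System.out.println(instrument.playNote(60));
    }
}
```

The key point is that the client programs against RemoteInstrument alone; whether the object lives in the same virtual machine or on another computer is hidden behind the interface.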

Java Beans

The most recent addition to the Java programming model is Java Beans - Java's component model.  A component is a software object with a well-known interface which conforms to rigorous standards imposed by the particular component model.  The result of this is that components are guaranteed to be interoperable.  As Vogel [Vogel, 1998:p3] states, “The component model allows programmers to combine the functionality provided by a number of Java classes into a single component.  Components can easily be put together, even by non-programmers, to achieve new functionality".  Many third-party software houses are developing commercial components which implement various functions, removing the need for the programmer to 'reinvent the wheel'.

Java Real-Time Support

"A system operates in real-time to the degree that those of it's actions which have time constraints are performed with acceptable timeliness" [Jensen, 1996:p1]  The real-time operation of a system may be permitted by either implicit means - hardware resource over-capacity, good fortune - or by explicit means - real-time resource management, quantified behavioural requirements.  "The guarantees and APIs provided by the standard Java platform do not meet the needs of real-time systems." [Sun 1998a:p1]

Two separate expert groups have formed to address these issues.  The 'Real-Time Java Expert Group' has been set up as part of Sun's 'formal process' for developing extensions to the Java language.  This group is concerned with "providing a set of simple, low-level primitives out of which real-time systems can be built" [Foote, 1998:p1].  The group confirms that the extensions are appropriate for developing distributed multimedia applications, but points out that Java is not particularly good at controlling multimedia devices attached to a computer: "The only issue I know of is an API for direct hardware access" [Foote, email]

The 'Real-Time Java Working Group' established itself independently after questioning Sun's monopoly-like attitude towards the Java language.  The results of this group are expected to be very similar to those of the first group.  The Working Group is concerned with developing a real-time core API "which offers minimal latency" [Nilsen, 1999:p1], where latency means the time taken to respond to an asynchronous event.  This group also confirms that the extensions are suitable for distributed multimedia applications, but points out that RMI may prove to be a real-time bottleneck: "You should recognize that there are aspects of RMI that hinder interoperability between virtual machines" [Nilsen, email]

Java Media Framework

The Java Media Framework is an extension to the Java programming language, developed by Sun, Silicon Graphics and Intel.  “The Java Media Framework API (JMF) specifies a simple, unified architecture to synchronize and control time-based data, such as audio and video, within Java applications and applets.” [Sun, JMF]  It supplies a set of functions enabling the playback of a number of audio and video formats including AIFF, MIDI, MP3, WAV, M-JPEG, MPEG-1, AVI and QuickTime.

Further investigation would be required to confirm that streaming (e.g. MIDI) from one machine to another can be performed with a delay short enough to constitute real-time.

Java Sound API

The Java Sound API is a component of the Java Media framework and provides functions which produce digital audio.

“The Java Sound API specification provides low-level support for audio operations such as audio mixing, audio capture, MIDI sequencing, and MIDI synthesis in a framework that promotes extensibility and flexibility.” [Sun, JavaSound]

Java Sound routines exist that can record and play back sound samples, and that provide General MIDI (GM) synthesis, which is 24-voice and rendered in software from wavetables.
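The javax.sound.midi package (the form in which Java's MIDI support ships) gives a flavour of this.  The sketch below builds a MIDI note-on message and, where a synthesiser is available, sends it to the software synthesiser; the synthesiser access is guarded because it may be absent (e.g. on a machine with no audio hardware):

```java
import javax.sound.midi.InvalidMidiDataException;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.MidiUnavailableException;
import javax.sound.midi.Receiver;
import javax.sound.midi.ShortMessage;
import javax.sound.midi.Synthesizer;

public class JavaSoundSketch {
    // Build a MIDI note-on message: channel, key number, velocity.
    public static ShortMessage noteOn(int channel, int key, int velocity) {
        try {
            ShortMessage msg = new ShortMessage();
            msg.setMessage(ShortMessage.NOTE_ON, channel, key, velocity);
            return msg;
        } catch (InvalidMidiDataException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        ShortMessage msg = noteOn(0, 60, 93);   // middle C, moderately loud
        try {
            // Send the message to the built-in software synthesiser.
            Synthesizer synth = MidiSystem.getSynthesizer();
            synth.open();
            Receiver receiver = synth.getReceiver();
            receiver.send(msg, -1);             // -1: process immediately
            Thread.sleep(500);                  // let the note sound
            synth.close();
        } catch (MidiUnavailableException e) {
            System.out.println("No synthesiser available: " + e.getMessage());
        }
    }
}
```

The same ShortMessage type is used whether the message goes to the software synthesiser, an external MIDI port, or (in our case) a lighting controller that happens to speak MIDI.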

Jini

Jini is a standard introduced very recently by Sun Microsystems.  It is a Java-based network-enabling protocol and requires a Java Virtual Machine (JVM) and the RMI protocol in order to run.

When a device supporting Jini is connected to a network, it declares its presence by broadcasting a 512-byte packet.  A ‘lookup server’ on the network notices this and interrogates the device to establish its capabilities.  For example, a scanner might report that it can scan a certain set of page sizes at a certain set of resolutions.  The lookup server remembers these details and relays them to clients when they are searching for specific types of devices.

A Jini-enabled device "can request any software it needs from a server, doing away with the need to install their own device drivers" [JavaVision, February 1999:p5].  However, the Jini standard need not relate only to hardware.  A software service (cf. 'component') may also be encapsulated using Jini, and Jini can also act as a 'wrapper' for legacy devices which do not support the JVM.

The Jini core is very small, approximately 50k, but the attractive point is that "these small parts work together to create a complex system".  [Williams, 1998:p2]  Devices which will support Jini range from "Personal Digital Assistants, to word-processing to kitchen appliances".  [JavaVision, February 1999:p5]

In Summary

Java is a programming language and a platform.  It is a candidate technology for realising multimedia compositions because of the following features:

·        It is object orientated: these advantages have been described above.

·        It is distributed: Java supports sockets, remote method invocation and CORBA.  This means that a multimedia composition may be realised using the resources of many computers.

·        It is portable: any code written using Java can (in theory) be executed on any type of computer which supports the Java platform.  This means that when realising a multimedia composition, the composer can make use of (nearly?) all existing computer-based resources and is not restricted to using a single type of machine.

·        It has multimedia capabilities: Java comes complete with a powerful 2D graphics library.  A 3D graphics library and a digital audio/MIDI library are also available.
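The 'distributed' point above can be illustrated with a short socket sketch.  A hypothetical example (the command string is invented): a tiny server thread accepts one connection and acknowledges the command it receives.  Here both ends run on the local machine, but the same code works across machines by substituting a real host name:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// A loopback socket demonstration: send a one-line 'command' over TCP
// and read back the server's acknowledgement.
public class SocketSketch {
    public static String roundTrip(String command) {
        try (ServerSocket server = new ServerSocket(0)) {   // port 0: any free port
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("ack: " + in.readLine());   // echo the command back
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            serverThread.start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(command);
                return in.readLine();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("playNote 60"));   // prints "ack: playNote 60"
    }
}
```

A composition system could pass far richer commands than MIDI allows over exactly this kind of connection, which is the point made in the Problem Domain section.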

CORBA

The Object Management Group (OMG) is a standards organisation specialising in providing a model of distributed object-orientated programming.  This is expressed in the Object Management Architecture (OMA) developed by the OMG.  The OMA aims to clarify the understanding of object-orientated theory by providing abstract definitions of object-orientated concepts - for example, an encapsulated object, the properties of objects, the methods ‘suffered’ by objects, and the component model.

An Object Request Broker (ORB) is also defined in the OMA.  The ORB is the central component of the distributed object model and provides a data/control bus between remote objects.

The Common Object Request Broker Architecture (CORBA) is a formal specification of an ORB according to the OMA philosophy.  “CORBA is the specification of the functionality of the ORB, the crucial message bus that conveys operation invocation requests and their results to CORBA objects resident anywhere, however they are implemented.”  [Vogel, 1998, p26]  CORBA makes use of OMG’s Interface Definition Language (IDL) which is used to formally specify or describe the interfaces of objects.
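For illustration, an IDL description of the piano interface discussed earlier might read as follows.  This is a hypothetical sketch, not taken from any OMG specification; an IDL compiler would generate client stubs and server skeletons from it for each target programming language:

```idl
// OMG IDL: a language-neutral description of a piano object's interface.
module Orchestra {
    interface Piano {
        void playNote(in short pitch, in short loudness, in long duration);
        void setFootPedal(in short pedalNumber, in boolean state);
        boolean getFootPedalStatus(in short pedalNumber);
    };
};
```

Because the interface is specified in IDL rather than in any one programming language, a C++ client could invoke a Piano implemented in Java, or vice versa.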

Objects communicate via the ORB, and the interface to the ORB is via an Object Adapter.  The job of an Object Adapter is to provide a consistent ‘view’ of an object regardless of its implementation dynamics, e.g. the object may be implemented on a specific computer, or may be run on whichever machine is most appropriate at the time.

CORBA provides location transparency – i.e. the calling program does not need to know on which computer the object resides – and platform transparency – i.e. the calling program does not know that the object may be implemented using different programming languages or different operating systems.  It is important to note that CORBA is not a software product – it is a specification, intended to be implemented by commercial software houses.

CORBAtelecoms

CORBAtelecoms is the specification of an extension of CORBA to facilitate the development of telecommunications systems.  It has been proposed by “CORBAtel”, one of the OMG’s Special Interest Groups.

The specification defines ‘streams’.  “A stream is a set of flows of data between objects, where a flow is a continuous sequence of frames in a clearly identified direction.”  [OMG, 1998, p2-1]

A stream can be thought of as a continuous sequence of data items from a producer object (‘source’) to a consumer object (‘sink’).  Once a stream has been established, passing data items to the consumer object requires much less overhead than a conventional request/reply style interaction, e.g. calling an object’s method.

“Although any data type could flow between objects, this specification focuses on applications dealing with audio and video exchange with Quality of Service constraints”.  [OMG, 1998, p2-1]  Quality of Service (QoS) is defined by Oliver et al [Oliver, 1998, p1]: “Real-time multimedia applications are characterised by their sensitivity to timing and loss on an end to end basis.  These requirements are referred to as Quality of Service requirements and have to be met at both the application level and within the network.”

In CORBAtelecoms, streams are controlled by Stream Interface Control Objects.  These objects provide a set of services used to control the stream flow, and are invoked using the conventional request/reply style interaction.

Real-Time Special Interest Group

Another special interest group within the OMG is the Real-Time Special Interest Group (RTSIG).  “The RTSIG is concerned with issues of guaranteed performance of requests to distributed objects, embedded systems, and fault tolerance” [Vogel, 1998:p15]

The RTSIG declares its mission as “to work within the OMG to augment existing OMA Technology for the requirements of real-time systems, and to promote CORBA technology in the real-time marketplace” [OMG, RTSIG], and is liaising with the CORBAtel group in order to develop real-time specifications appropriate for telecommunications.

In Summary

The Common Object Request Broker Architecture (CORBA) is a specification which enables distributed software objects to intercommunicate.  The software objects may be written using any programming language which supports CORBA and may be executed on any machine with an IP address.

CORBA is a candidate for realising multimedia compositions because of the following features:

·        It is object orientated: these advantages have been described above.

·        It is distributed:  CORBA-compliant objects communicate with each other through an Object Request Broker (ORB) - a distributed communication layer.  This means that a multimedia composition may be realised using the resources of many computers.

·        It is architecture independent:  any machine architecture may participate in a CORBA-compliant distributed application.  This means that when realising a multimedia composition, the composer can make use of all existing computer-based resources and is not restricted to using a single type of machine.

·        It is programming language independent:  CORBA-compliant remote objects may be coded using any programming language which supports CORBA’s Interface Definition Language (IDL)[2].  This means that elements of the multimedia composition may be realised using the most appropriate programming language.

MIDI

It would be unfair to exclude MIDI as a candidate technology.  MIDI sequencers combined with MIDI output devices would realise event-based musical elements of a multimedia composition easily and efficiently.  The MIDI specification also includes synchronisation mechanisms to keep two distinct MIDI systems in perfect time.

However, MIDI fails when it comes to supplying devices with anything more than simple musical instructions, and its narrow bandwidth may create further limitations.  Richer instructions and greater bandwidth are fundamental to realising multimedia compositions, and for this reason MIDI is not considered any further as the underlying architecture.

MIDI must still be used, however, to control MIDI devices and will act as a bridge between the chosen architecture and such devices.

Inferno

Inferno, produced by Lucent Technologies, is a "small, portable and secure network operating system designed to be independent and deliver services through a variety of existing and emerging networks, providing universal access to resources and information"  [Inferno,1:p1-1]

The network operating system kernel is very small and can run on devices with less than 1MB of RAM.  Also, any application written for the Inferno system is platform independent, and can be run without modification on any system which supports Inferno.

Inferno can be run as a native operating system, with versions for Intel x86, AMD 29000, MIPS, Motorola 68030 and ARM processors.  Inferno can also be run in 'emulation mode' on Microsoft Windows 9x, Windows NT and Solaris SPARC Unix.

Inferno is shipped with a general purpose programming language called 'Limbo', a virtual machine called 'Dis' and a communications protocol called 'Styx'.  Also supplied are an application programming interface (API) and a set of platform-independent graphics libraries.

Inferno is innovative in that every resource associated with the system is represented as a file in a hierarchical filing system [Inferno,2:p5-3].  Programmers simply use conventional file input/output routines to interact with devices.  The hierarchy of filenames is stored in a namespace, which provides network transparency.

RM-ODP

The Reference Model of Open Distributed Processing (RM-ODP) is a programming reference model which clarifies object-orientated and distributed programming concepts.

“The Reference Model of Open Distributed Processing (RM-ODP), ITU-T Recommendations X.901 to X.904 | ISO/IEC 10746, is based on precise concepts derived from current distributed processing developments and, as far as possible, on the use of formal description techniques for specification of the architecture.” [ISO, 1]

The RM-ODP aims to allow “portability of applications across heterogeneous platforms, interworking between ODP systems, i.e. meaningful exchange of information and the convenient use of functionality throughout the distributed system, distribution transparency, i.e. hide the consequences of distribution from both the applications programmer and user.”  [DSTC, 1]

DCE

The Distributed Computing Environment (DCE), developed by the Open Group, is a suite of integrated software services that together provide a distributed computing environment.

"The technology comprises software services that reside in top of the operating system and middleware that employs lower-level operating system and network resources" [DCE, 1996]

DCE services include Remote Procedure Call (RPC), which enables client-server program communication; a global directory service; a time service, which synchronises system clocks throughout the network; and a distributed filesystem.

DCOM

The Distributed Component Object Model (DCOM) is a component model developed by Microsoft.  Implementations are available for Windows 9x and NT, as well as for Solaris, Linux and HP/UX.

The first component model Microsoft developed was OLE (Object Linking and Embedding), which was 16-bit and supported early Visual Basic VBX components.

OLE was superseded by COM with the introduction of the Win32 platform provided by Windows 9x and NT4.  Today, COM components are referred to as ‘ActiveX’ components, and may take the form of a user interface control; a Dynamically Linked Library (DLL), which runs in the same process as the calling program; or an executable, which runs in a separate process and relies on Inter-Process Communication (IPC) in order to communicate with other components.

A program that wishes to use a DCOM-enabled component must know its location, i.e. DCOM does not provide location transparency.  Components communicate using Remote Procedure Call (RPC) as specified by DCE.

Microsoft have announced enhancements to the DCOM service, embodied in COM+, which is shipped with Windows NT 5.0.

Existing Work

A number of research projects have already been carried out which make use of some of the candidate technologies in developing distributed multimedia applications.  This section is a summary of the work most closely related to this project.

Supporting Real-Time Multimedia Applications with Distributed Object Controlled Networks [Oliver, 1998]

A distributed videoconferencing demonstrator is implemented using streams, QoS and CORBA.  The project ends with a critical analysis of CORBA.

This project is concerned with streaming multimedia in real-time over internetworks and refers to the example of distributed videoconferencing over an ATM network.

It notes that all such applications require a minimum Quality of Service (QoS), and this QoS must be respected at the application, API, operating system, protocol stack and network levels.  The operating system, or Distributed Programming Environment (DPE) must therefore encompass network control and supply network-level control objects.

At the time the project was under development, CORBA only really supported ‘request’ or ‘request/reply’-style interactions, which are appropriate for control operations but particularly inappropriate for ‘stream’-style interactions.  With respect to multimedia streams, the project made the distinction between the flow of multimedia data and the actual control of the network.

The project found CORBA to be helpful because of its location and distribution transparency, along with hardware, operating system and programming language transparency.  When streams were established, the initial binding costs using CORBA were high, but subsequent operation call overheads were low.  The team found CORBA to be more mature and extensive than DCOM, with more varied platform support.

However, the team found that ORBs are not very interoperable, and porting code between ORBs was difficult.  Furthermore, ORBs are not particularly fault-tolerant and there is a lack of connection monitoring and testing tools.  The location transparency provided by CORBA also proved to be problematic when wishing to address an explicit object.  CORBA was found to be too bulky for resource-constrained devices such as Personal Digital Assistants (PDAs).  Problems were also encountered when attempting to adapt the ORB to support real-time multimedia streams, or multiple communications protocols.

A Distributed Object Platform Infrastructure for Multimedia Applications [Coulson, 1998]

Author creates an API influenced by RM-ODP which enforces multimedia types and implements a transport layer, a stream-binding module with QoS and a threads package.

The goal of this project was to develop an Application Programming Interface (API) which “provides a low-level platform which offers generic middleware services useful for the implementation of a range of multimedia capable distributed object systems.” [Coulson, 1998, p1]

The project has been influenced by real-time/multimedia-orientated programming models put forward by the RM-ODP and focuses on providing a framework for the use of streams.

The project aims to provide application-specific multimedia types and communications protocols.  This project also aims to provide soft real-time support.

The product of the project was a middleware API called the General Object Platform Infrastructure (GOPI).  GOPI is decomposed into: a base module containing useful low-level programming constructs; a threads module containing primitives for thread synchronisation and timing; a message-based inter-thread communication service; a transport layer used by application-specific protocols; and a binding module allowing applications to establish multimedia streams according to particular QoS requirements.

The project has been influenced by, but does not adhere to, the CORBA specification.

MAESTRO

Distributed multimedia streaming API

The MAESTRO project is another API which attempts to give programmers distributed multimedia streaming functionality.  The objective of MAESTRO is “...designing and implementing services required to support various multimedia applications in a distributed processing environment. It entails analyzing the requirements for supporting multimedia applications in a distributed environment, designing the services and their APIs and then implementing in a testbed distributed computing environment.”  [Hong, 1998]

Unfortunately much of the detailed documentation is in Korean, but the general documentation shows that the project is coded using C and C++, and in many areas, the CORBA specification is respected.

The Gap

The existing work done using the candidate technologies does not solve the original problem of giving a composer control of a network of multimedia objects.  These projects fall short in a number of areas, leaving a ‘gap’ which this project aims to investigate:

1.      The existing projects are concerned with streaming digital audio-visual material

The previous projects specialise in transmitting digital audio-visual material.  To be helpful in realising multimedia compositions, this should be extended to encompass any type of media, including MIDI & other control languages or streams of instructions composed for a specific multimedia program.

2.      The existing projects aren’t interested in generating media content

The other projects aim to facilitate communication applications such as teleconferencing.  It is apparent that these projects are not aimed at composers, who are interested in using the media types in an artistic manner.  Furthermore, the other projects either omit details about where the multimedia contents comes from, or just rely on the presence of standard audio-visual sources, such as a video camera or a microphone.

Work has yet to be done which is specifically aimed at composers and which opens up distributed computer-controlled media as an artistic resource.

3.      Give the multimedia objects some intelligence

The previous projects are concerned with networking strategies and give little attention to the actual multimedia input and output objects.  These tend to be ‘dumb’ devices which do nothing more than produce an audio/visual stream, or render an input stream.

Giving the multimedia objects a degree of intelligence would be very helpful for composers.  The objects would take care of their own internal workings, taking this load off the composer, and provide him/her with a simple and meaningful interface - the complexity of the multimedia system is delegated to participating computers.  Doing this would create a more ‘abstract’ view of the system, the benefits of which are described in the “object orientation” section (see above, page 7).

The objects may also be given other intelligence, such as how to handle invalid data appropriately or how to detect and communicate with other objects.

4.      Give the composer access to the multimedia object’s interface

The previous projects also assume that only applications will be interested in communicating with multimedia objects.  To use any of the previous projects, one must write, compile and execute an application.  This arrangement is inappropriate for composers, who may not be experienced programmers.  Furthermore, compositions created in this way are difficult to create and difficult to change.

A more appropriate solution would be to give the composer access to all the multimedia objects via their interface.  Knowing about these objects, the composer could then include them in his/her composition.

Objectives

Project Specialisation

In the first section, “context”, I have explained the problem to be solved.  I have also suggested a number of candidate technologies which will hopefully provide a solution to the problem, and I have then explained what has already been done with these candidate technologies.  I finished by showing where the existing projects have failed with respect to the original problem of giving a composer control of a distributed network of multimedia objects.

However, filling the ‘gap’ missed by existing work is too substantial a task for this solo MSc project, so I shall now identify a few important aspects of this gap which I intend to pursue.

1.      Encapsulation of multimedia objects

One of the principal aims of this project is to represent musical/multimedia output objects as software objects.  This process of abstraction reduces the complexity of the system and presents the composer with a meaningful interface, as described in “object orientation” on page 7.

2.      Use of output objects only

However, this project will only be concerned with controlling multimedia output objects.  This will limit the composer to realising a multimedia composition.  The composer will not be able to create pieces of work which are interactive or which are performed live.

3.      Objects are distributed

The software objects which encapsulate multimedia output objects will be distributed over a network of computers.  This is due to the practical reason that many of the multimedia resources we shall be using are tied to their host machine and, as such, are already distributed - e.g. computer screens or audio outputs.  Distributing the software objects also has the advantage of delegating the complexity and processing requirements of the overall system to all participating computers.

4.      The composer writes a score

Once software versions of multimedia output objects are in place, the next step is for the composer to write a score.  The score, in effect, will be a timed list of remote function calls.  Each entry will pass a number of parameters to a remote multimedia output object, which will act upon them immediately.  The score could be thought of as a more general version of a MIDI event list.
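The idea of the score as a timed list of calls can be sketched in plain Java.  All of the names here (Score, Event, play) are my own illustrations, not part of the system’s eventual API, and printing stands in for the remote call:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class Score {
    // One entry in the score: wait 'millis', then send 'instruction'.
    static class Event {
        final long millis;          // delay before this event, in milliseconds
        final String instruction;   // what to send to a server object
        Event(long millis, String instruction) {
            this.millis = millis;
            this.instruction = instruction;
        }
    }

    // Play the score in order.  Here 'sending' just collects the strings;
    // the real system would make a remote call to a multimedia server.
    static List play(List score) {
        List sent = new ArrayList();
        for (Iterator it = score.iterator(); it.hasNext(); ) {
            Event e = (Event) it.next();
            try { Thread.sleep(e.millis); }        // wait until the event is due
            catch (InterruptedException ignored) {}
            sent.add(e.instruction);
        }
        return sent;
    }

    public static void main(String[] args) {
        List score = new ArrayList();
        score.add(new Event(0,   "graphics: show flower, size 10"));
        score.add(new Event(400, "midi: note on, channel 0, note 60"));
        score.add(new Event(400, "midi: note off, channel 0, note 60"));
        System.out.println(play(score));
    }
}
```

Raw (non-generic) collections are used deliberately, since Java 1.2 predates generics.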

Investigate the candidate technologies

I will accomplish the above aims by investigating candidate technologies, identifying the most appropriate technology and then building a system which demonstrates all of the above.  However, time and resource constraints do not permit investigation of every single candidate technology.  As such, I have chosen to investigate only Java and CORBA.

I have chosen Java mainly because it is freely available for many platforms, well supported and well documented.  Java is also a primary candidate for the reasons summarised on page 13.  I have chosen CORBA because of the reasons summarised on page 16, but also because Java integrates very well with CORBA. The IDL to Java mapping is natural, and Java is shipped complete with a CORBA compliant ORB and related tools.

As I have never programmed in Java before, I will begin by learning basic Java.  I will then investigate Java’s multimedia capabilities - 2D graphics and MIDI synthesis.  Following this, I will investigate networking in Java, beginning with sockets and then moving onto Remote Method Invocation (RMI).

I will then investigate Java’s Native Interface (JNI) which allows a Java program to gain access to executable code native to the host machine.  Finally, I will investigate CORBA as implemented by Java.

Identify the best candidate technology

The investigation is intended to point out the most promising candidate technology.  This technology will:

·        Be easy to use:  it will hide complexity and provide the composer with a meaningful interface.

·        Respond quickly enough:  an event in the score will be realised by the output multimedia device quickly enough to constitute real-time.  It should be noted that this criterion also depends upon other factors, including network speed and output object response time.

·        Be capable enough: it will allow control of many media types and permit future expansion.

Build a demonstrator

The incentive behind this project is to make practical use of the knowledge gained and actually construct a working system.  Therefore I intend to build a demonstrator application which will be a distributed realisation of a multimedia composition.

At present, the exact nature of the composition is unknown - it will rely on the knowledge and recommendations presented by this project, and it will be conceptualised and realised after this report is finished.

Potential compositions include conventional blues/jazz/dance/techno music realised with accompanying graphics and lighting effects, or an electroacoustic piece which explores realising different ‘views’ of elements of the piece simultaneously using multiple media.

It will be implemented using the most appropriate technologies, as identified in the project, and will demonstrate centralised control of a set of distributed, heterogeneous multimedia output objects.


To aid in understanding the nature of the demonstrator system, assume, for the time being, that the hardware shown in figure 1 will be used:

Figure 1 - The demonstrator system

Design

This section describes the basic client-server design of the demonstrator system, which is shown in figure 2.



Figure 2 - The basic architecture design

On the server side, remote objects are constructed which, when executed, initialise themselves and then wait for incoming instructions.  They are ‘server’ objects, because they do nothing except serve incoming requests.  All server objects are autonomous and are not aware of the existence of other server objects.  All server objects must be executed, and in a state ready to accept incoming calls, before execution of the client program.

The client program is an application which sends instructions to the server objects at specific points in time.  In this demonstrator system, the client program is little more than a ‘script’ or ‘event-list’.  The following shows intended client pseudocode:

begin

establish connection to remote server #1
establish connection to remote server #2
...
establish connection to remote server #n


send instruction to a server
send instruction to a server
send instruction to a server

pause for a number of milliseconds

send instruction to a server

pause for a number of milliseconds

...

send instruction to a server


release connection to remote server #1
release connection to remote server #2
...
release connection to remote server #n

end

The client program sends a number of instructions to the appropriate servers and then pauses for a short time before sending further instructions.  Sets of consecutive instructions, found between the pauses, are intended to be sent with minimal delays in-between them, so they can be thought of as happening at the same time.  The order in which they appear in the code is still respected, however.

Implementation

Learning Java 1.2

All the code in this project is written in Java 1.2 (aka “Java 2”).  Java 1.2 is freely available for the PC and many other platforms.  The rest of this section is for readers not familiar with programming in Java.  Experienced programmers can feel free to skip to the section “Graphics 2D”.

Many resources for learning Java already exist, and it is beyond the scope of this report to actually teach the Java language. Sun’s online Java Tutorial at http://java.sun.com/docs/books/tutorial (or \Docs\tutorial\index.html on the included CD-ROM) is, perhaps, the most convenient way of learning Java. It is an excellent resource and covers all aspects of the Java programming language. It is recommended that readers work through the “Getting Started” and “Learning the Java Language” sections before proceeding with this report.  These two sections will explain how to download and install JDK1.2 (the compiler, interpreter & build tools), get the first program up and running, and introduce the reader to object-orientated concepts in Java.  (JDK 1.2 along with other useful software tools have been provided in \Software\ on the included CD-ROM.)

Sun’s Java Tutorial does not, however, go into very much detail when covering classpaths and packages, which are used when using Java language ‘extensions’ or third party Java code.  The rules concerning these topics can be quite confusing and the source of many errors.  The rest of this chapter explains what they are and how to use them.

Classpaths

Classpaths and packages come into play whenever extra source code or classes are included in a program.  When you write a new class, it is saved with the .java extension, e.g. MyClass.java. In another program, e.g. MyProg.java, you might want to create an instance of MyClass:

MyClass c= new MyClass();

When you use javac to compile MyProg.java, the compiler looks in the current directory for MyClass.class - a compiled version of MyClass.java. In this example, it does not find MyClass.class, so it looks for MyClass.java.  The compiler finds MyClass.java, and compiles it.  It then compiles MyProg.java.

If the compiler had not found MyClass.java in the current directory, then it would look for MyClass.class in the core Java classes library. If the compiler could still not find MyClass.class, then it would report an error and exit.

You can override this default behaviour by making use of the ‘classpath’.  A classpath is a list of directories in which the compiler should search for .class or .java files.  You can set the classpath as either a command line option, or as an environment variable - see table 1 for details.  The class path should be composed of the following paths: ‘.’ (the current directory), the Java root directory (e.g. c:\java or /usr/java), then any extra directories.

The classpath applies in exactly the same way when executing Java programs.  The Java runtime system takes a command-line argument of the same form, or uses the classpath environment variable.  The only difference is that the Java runtime system does not look for or compile .java files.



Windows 9x/NT - temporary environment variable

In an MS DOS/Command-Prompt window, type: set classpath=.;c:\java;c:\java\my_classes

Windows 9x - permanent environment variable

Edit c:\autoexec.bat and add: set classpath=.;c:\java;c:\java\my_classes Reboot the system.

Windows NT - permanent environment variable

Click Start → Settings → Control Panel → System → Environment.  Set the variable to classpath and the value to .;c:\java;c:\java\my_classes  Click [Set] → [OK]

Open a new command prompt window to see the changes. Changes will not be made to existing command prompt windows.

Windows 9x/NT - command line option

javac -classpath .;c:\java;c:\java\my_classes my_file.java

UNIX - temporary environment variable

Type: setenv CLASSPATH .:/usr/java:~/java/my_classes

UNIX - permanent environment variable

Edit ~/.cshrc and add: setenv CLASSPATH .:/usr/java:~/java/my_classes

Open a new shell window to see the changes. Changes will not be made to existing shell windows.

UNIX - command line option

javac -classpath .:/usr/java:~/java/my_classes my_file.java

Table 1 - How to set the classpath.  Please replace the java path and the my_classes path with settings which match your own system.

Packages

A package is a set of additional Java classes that a programmer may wish to make use of.  It is essentially the same concept as a 'library' in the C programming language - e.g. <stdio.h>.  In the same way that <stdio.h> contains many functions, a package in Java can contain many classes.

However, in Java, each compiled class is held in a separate .class file.  The source files are also held in separate .java files (in general, but not always).  This means that a package may contain many files.

So - a Java source file qualifies as being part of the hypothetical package, “MyPackage” if:

1)      it contains the line: package MyPackage;

2)      it is in a directory named MyPackage

 

A package is available if:

1)      the directory containing MyPackage is in the classpath;

2)      the MyPackage directory itself is not part of the classpath.

 

It is the directory containing MyPackage that must be in the classpath.  If the MyPackage directory itself forms part of the classpath, the compiler may fail with the error:

File d:\java\MyPackage\MyClass.class does not contain type MyClass as expected, but type MyPackage.MyClass. Please remove the file, or make sure it appears in the correct subdirectory of the class path.

Graphics 2D

What is Graphics 2D?

The Java 2D Graphics API (aka “Graphics 2D”) is the 2D graphics library which is shipped with Java 1.2.  Graphics 2D provides the programmer with facilities to create and manipulate line art (curves, rectangles, ellipses), text and images.

Graphics2D is useful for this project because we will be using it to provide the graphics capabilities of a graphics server.

For more information, please refer to http://java.sun.com/docs/books/tutorial/2d/

Example 1

How to run the example

Open a DOS/UNIX window and navigate your way to the directory /Examples/Graphics2D/HelloWorld/ on the included CD-ROM.

Type: java Main

After a few moments, the following window should pop up:

Explanation

The example code comes in two parts, the main program, and the component which actually does the drawing.  I shall first explain the main program:

The Main Program

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

These three lines import the packages required to use the Java 2D API.

public class Main
{
      public static void main(String argv[])
      {    
            JFrame wnd= new JFrame("Java Graphics2D Example");

A JFrame is a graphics window in the host environment.  This line creates a new JFrame with the title “Java Graphics2D Example”.

            wnd.addWindowListener(new WindowAdapter() {
                  public void windowClosing(WindowEvent e) { System.exit(0); }
            });

When we’ve finished looking at our JFrame, we’ll click the window’s close button and the program will quit.  We need to explicitly state this in the code which is what the above line does - although the process is rather involved.

The above line derives a new class with no name - an anonymous class - from WindowAdapter; this is the part beginning with {public void...[3]  The new class overrides the windowClosing method with the code System.exit(0).   The new class is registered with the JFrame using the addWindowListener method.

If this seems too complicated, just assume that when the user clicks close, the program executes System.exit(0).
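For readers who prefer it spelt out, the anonymous class is equivalent to the following named class.  The name QuitOnClose is my own illustration, not part of the example code:

```java
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;

// Named-class equivalent of the anonymous window listener.
public class QuitOnClose extends WindowAdapter {
    public void windowClosing(WindowEvent e) {
        System.exit(0);   // quit when the user closes the window
    }
    // Usage: wnd.addWindowListener(new QuitOnClose());
}
```

The anonymous form simply saves declaring (and naming) a one-off class.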

            wnd.setSize(new Dimension(330, 200));

This line sets the size of the JFrame to be 330 pixels wide by 200 pixels high.

            wnd.getContentPane().setBackground(Color.black);

The part of the JFrame we’re interested in is the ‘content pane’, where all the graphics activity takes place.  This line sets the background colour of the content pane to be black.

            MyGraphicComponent m= new MyGraphicComponent();

We create graphics by adding a ‘paintable’ component to the JFrame’s content pane.  A component is a class which extends the Component class.  All components are defined to have a paint method which is where the graphics code is placed.

The above line creates a new instance of our “Hello World!” graphics component.

            wnd.getContentPane().add(m);

And this line adds our graphics component to the JFrame’s content pane.

            wnd.show();
    }
}

And finally, we show our JFrame, which makes it visible on the user’s screen.  We may also now interact with the JFrame.

The Graphics Component

The second part of the program is the graphics component:

import java.awt.*;

Again, we need to import the packages so the compiler understands what we’re talking about.

public class MyGraphicComponent extends Component
{

The MyGraphicComponent class must extend the Component class.  Only Components may be added to a JFrame’s content pane.

      public void paint(Graphics g)
      {

The only method we’re implementing is the paint method.  The paint method is called whenever the component needs repainting, and is supplied with a graphics context, g.  A graphics context can be thought of as an object with graphical methods.  See \Docs\jdk\api\java\awt\Graphics2D.html on the included CD-ROM for a list of methods supported by Graphics2D.

            Graphics2D g2= (Graphics2D) g;

The graphics context we are supplied with is an AWT graphics context.  AWT is the graphics library which was supplied with earlier versions of Java.  Although we could use this context to produce graphics, Graphics2D provides a context with greater functionality.  The above line casts the AWT graphics context, g, to a Graphics2D context, g2.

            g2.setFont(new Font("Helvetica", Font.BOLD, 42));
            g2.setPaint(Color.green);
            g2.drawString("Hello World", 30, 100);

We now begin to use the Graphics2D context.  The first method we call sets the current font to Helvetica, bold type and 42 points in size.  We then set the current colour to green and draw the string “Hello World” at position (30, 100) pixels from the top-left of the JFrame.

            g2.setPaint(Color.pink);
            g2.fillRect(270, 40, 10, 40);
            g2.fillOval(270, 90, 10, 10);
      }
}

We finish by setting the current colour to pink, and drawing a rectangle and a circle.

Example 2

How to run the example

Open a DOS/UNIX window and navigate your way to the directory /Examples/Graphics2D/Flower/ on the included CD-ROM.

Type: java Main

After a few moments, the screen should change to the following:

 

To close the program, click the screen and press Alt-F4 (for the PC) or CTRL-C (for UNIX machines).

Explanation

This example demonstrates full-screen graphics, programmatic control of a graphics component, simple animation and device-independent dimensions which are expressed as a percentage of the screen size.

As in the Hello World example, this program comes in two parts - the main program and the graphical component.  I shall now explain new features in the main part:

The Main Program

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

public class Main
{
      public static void main(String argv[])
      {
            JWindow wnd= new JWindow();

This time we’re using a JWindow instead of a JFrame.  A JWindow has no border - that is, no toolbar, title or edge.  This makes it a suitable candidate for full-screen graphics.

            wnd.addWindowListener(new WindowAdapter() {
                  public void windowClosing(WindowEvent e) { System.exit(0); }
            });
            wnd.setSize(wnd.getToolkit().getScreenSize());
            wnd.setSize(wnd.getToolkit().getScreenSize());

Instead of setting the size to 330 by 200 pixels as in the Hello World example, we now wish to set the size of the JWindow to the size of the user’s screen.  How do we know what the size of the user’s screen is?  We ask the JWindow’s toolkit.

The toolkit object contains many useful properties about the host operating environment - see \Docs\jdk\api\java\awt\Toolkit.html on the included CD-ROM for more details.  The getScreenSize method returns a Dimension object - the same type of object that the JWindow’s setSize method requires.

wnd.setBackground(Color.black);
wnd.getContentPane().setBackground(Color.black);

The JWindow has a background and the JWindow’s content pane also has a background.  The default background colour is grey.  In the Hello World example, we only bothered to reset the content pane’s background, because this appears in front of the window’s background.  In this example, however, we redraw the whole screen every 400 milliseconds.  This means drawing the window background, the content pane background and then the graphics components.

If we left the window background as grey, there would be a ‘flash’ of grey appearing very quickly before it is overwritten with the black background of the content pane.  For this reason, we set the background of both the JWindow and the content pane to be black.

            FlowerComponent m= new FlowerComponent();
            m.setSize(10);

We then create a new graphic component and set its size property to 10.  The graphics component used in this example contains a setSize method, which I will explain in a moment.

            wnd.getContentPane().add(m);
            wnd.show();

            while(true)
            {
                  for(int i= 10; i<=100; i= i+10)
                  {
                        try {Thread.sleep(400);}
                        catch (InterruptedException e) {}

Thread.sleep(400) causes the program to pause execution for 400 milliseconds.  The catch clause does nothing, but is required because Thread.sleep may throw an InterruptedException.

                        m.setSize(i);
                        wnd.repaint();
                  }
            }
      }
}

wnd.repaint() redraws the screen to reflect the changes we have made to the size of the graphics component.

The Graphics Component

The second part of the program is the graphics component:

import java.awt.*;

public class FlowerComponent extends Component
{
      private int size= 100;

This version of a graphics component contains a private member variable, which is used by the paint method to determine the size of the overall graphic.

      public void paint(Graphics g)
      {
            Graphics2D g2= (Graphics2D) g;
            int xcen= this.getSize().width/2, ycen= this.getSize().height/2;

Another feature of this component is that the flower is always drawn at the centre of the screen, regardless of the screen dimensions.  getSize (above) refers to the size of the FlowerComponent, which is the size of our JWindow.  xcen and ycen, therefore, refer to the point which represents the centre of the screen.

            g2.setPaint(Color.white);
            for(double theta= 0; theta<=2*Math.PI; theta= theta+((2*Math.PI)/7))
            {
                  g2.fillOval((int) (xcen + (size/2)*Math.cos(theta) - size/4), (int) (ycen + (size/2)*Math.sin(theta) - size/4), size/2, size/2);
            }

This section uses trigonometry to draw seven white circles around the centre of the screen.  This is made a little complicated, because the fillOval method refers to the top left point of a circle, not the centre of the circle.  Note that (int) is used to explicitly convert the first two parameters of fillOval from a double to an int.
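The centre-to-corner conversion can be checked in isolation.  PetalMath and petalTopLeft are illustrative names of my own, not part of the example program:

```java
public class PetalMath {
    // Top-left corner for fillOval, given the geometry used in
    // FlowerComponent: petal centres lie on a circle of radius size/2
    // around (xcen, ycen), and each petal has radius size/4.
    static int[] petalTopLeft(int xcen, int ycen, int size, double theta) {
        int x = (int) (xcen + (size / 2) * Math.cos(theta) - size / 4);
        int y = (int) (ycen + (size / 2) * Math.sin(theta) - size / 4);
        return new int[] { x, y };
    }

    public static void main(String[] args) {
        // Petal at theta = 0: shifted right by the ring radius (50),
        // then pulled back by the petal radius (25) in both axes.
        int[] p = petalTopLeft(400, 300, 100, 0.0);
        System.out.println(p[0] + "," + p[1]); // 425,275
    }
}
```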

            g2.setPaint(Color.yellow);
            g2.fillOval(xcen-(size/2), ycen-(size/2), size, size);
      }

And this bit draws a single yellow circle representing the middle of the flower.

      public void setSize(int percentage)
      {
            if(percentage>100) percentage= 100;
            if(percentage<1) percentage= 1;
            this.size= (int) ((double) (this.getSize().height/2) * ((double) (percentage/100.0)));
      }
}

The final section of code is the setSize method.  This method takes a parameter between 1 and 100 which represents the size of the flower.  Again, this function works independently of screen dimensions - a parameter value of 1 represents the smallest flower and 100 represents a flower the height of the screen.
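The arithmetic inside setSize can be tried on its own.  Note that the floating-point division is essential: with plain integer arithmetic, percentage/100 would be zero for every value below 100.  SizeCalc is an illustrative name of my own:

```java
public class SizeCalc {
    // The percentage-to-pixels mapping from FlowerComponent.setSize:
    // clamp the percentage to 1..100, then scale half the component
    // height by it.
    static int flowerSize(int componentHeight, int percentage) {
        if (percentage > 100) percentage = 100;
        if (percentage < 1) percentage = 1;
        return (int) ((double) (componentHeight / 2) * (percentage / 100.0));
    }

    public static void main(String[] args) {
        System.out.println(flowerSize(600, 50));  // 150
        System.out.println(flowerSize(600, 100)); // 300, i.e. height/2
    }
}
```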

Appraisal

Although the concepts of windows, content panes and components may be a little complicated, the Java 2D API is relatively easy to use and has a very rich set of graphics functions.

However, practical usage shows that the Java 2D API isn’t particularly good for animation, which may be sluggish and may flicker.  This may perhaps be resolved by using a double-buffering algorithm, whereby the new image is rendered off-screen and replaces the visible screen as the final step.
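A minimal sketch of the double-buffering idea, assuming a Component subclass along the lines of FlowerComponent.  This is an illustration of the technique, not code from the project, and the drawing itself is simplified to a single circle:

```java
import java.awt.Color;
import java.awt.Component;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class BufferedFlower extends Component {
    public void paint(Graphics g) {
        int w = getSize().width, h = getSize().height;

        // Render the whole frame into an off-screen image first...
        BufferedImage buffer = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = buffer.createGraphics();
        g2.setPaint(Color.black);
        g2.fillRect(0, 0, w, h);
        g2.setPaint(Color.yellow);
        g2.fillOval(w / 2 - 50, h / 2 - 50, 100, 100);
        g2.dispose();

        // ...then copy it to the screen in a single step, so the viewer
        // never sees a half-drawn frame.
        g.drawImage(buffer, 0, 0, null);
    }
}
```

Because the visible surface only ever receives a complete frame, the grey ‘flash’ and partial redraws are eliminated, at the cost of one extra image in memory.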

The JWindow or JFrame classes may also be a performance bottleneck.  Both classes model the display as a collection of several layers, all of which are rendered when displaying graphics.  This extra functionality seems to be provided at the expense of system performance.  Investigating more direct control of the display may improve the quality of animations.

Further Information

For further information on how to program using the Java 2D API, please refer to:

·        The Java Tutorial (\Docs\tutorial\2d\index.html on the included CD-ROM)

·        The Java 2D API Documentation (\Docs\jdk\guide\2d\index.html on the included CD-ROM)

JavaSound

Unfortunately, the JavaSound API is only available for the PC at present.  This section is for PC users only.

What is JavaSound?

JavaSound is an extension to the Java programming language which provides a rich set of sound-related functions, including digital audio input/output and General MIDI synthesis.  The MIDI synthesis is performed using software wavetable synthesis and supports the use of soundbanks - which are complete instrument definitions including any source samples.

JavaSound is useful for this project because we’ll be using it to provide audio functionality to any multimedia compositions.  In particular, we will be using the MIDI synthesiser to build a remote MIDI server program.

How to run the example

What to type

Locate the file \Software\javasound0_8_6ea.zip on the included CD-ROM.  Unzip this file to a temporary location and navigate your way to \javasound086\lib.

Copy the file sound.jar to the java\lib directory of your Java installation.  Copy the files js.dll, libjs.so and soundbank.gm to the java\bin directory of your Java installation.  Finally (and optionally), copy the directory \javasound086\doc\ to java\doc\JavaSound\ in your Java installation.

Add java\lib\sound.jar to your classpath using one of the methods described on page 34.  Navigate your way to the directory Examples\JavaSound on the included CD-ROM and type java Main.

What to expect

Assuming that the classpaths and system paths are correct and the audio output of the computer is active, you should hear the machine play a scale from Middle C to C the octave above.

Explanation

import javax.media.sound.midi.*;

This line imports the MIDI components of the JavaSound API.

public class Main
{
      public static void main(String[] args)
      {
            Synthesizer synth= null;
            MidiChannel midiChannels[];

            synth= MidiSystem.getSynthesizer(null);
            if(synth==null)
            {
                  System.out.println("Could not get default synthesizer");
                  System.exit(1);
            }

The JavaSound API models all MIDI devices as being subclasses of the MidiDevice class.  Furthermore, MidiDevices may be Receivers or Transmitters.  The Synthesizer class is a Receiver which generates sound.  The above section attempts to obtain a reference to the JavaSound default Synthesizer.

            midiChannels= synth.getChannels();

A Synthesizer contains methods which return descriptive information about its abilities or which control global settings, such as which soundbank to use.  To actually generate sounds, we need access to the Synthesizer’s MIDI channels.  The above line fills the midiChannels array with references to the Synthesizer’s 16 MIDI channels.

            midiChannels[0].programChange(0);

We are only using one MIDI channel - channel number 0.  The first instruction we send is a program (instrument) change.  This method call sets MIDI channel 0 to program number 0, which should be “Grand Piano”.

            for(int i= 60; i<=72; i++)
            {
                  midiChannels[0].noteOn(i, 64);
                  try {Thread.sleep(500);}
                  catch (InterruptedException e) {}
                  midiChannels[0].noteOff(i, 64);
            }

The next section loops i through 60 to 72 which represent the notes middle C to C one octave above.  It calls the noteOn method, with note number i, and velocity 64.  It then waits for half a second and calls the noteOff method with note number i and velocity 64.  The noteOff method must be called to switch off notes which would otherwise sound indefinitely.

            System.exit(0);
      }
}

Finally, we manually exit the program.  This is required because the MIDI synthesiser, and thus our program, remains active even after the code has finished.

Appraisal

The JavaSound MIDI synthesiser is easy-to-use and platform independent.

However, because the sounds are rendered in software, the system requires a lot of CPU overhead and may prohibit the use of other multimedia elements at the same time.  The MIDI sounds are not perfect quality and the system only supports 24 simultaneous notes.

Further Information

For further information on how to program using the JavaSound API, please refer to the JavaSound documentation (\Docs\JavaSound\index.html on the included CD-ROM).

Sockets

What are Sockets?

A socket is a communications channel between two computers.  The socket has a start point (the source) and an end point (the sink).

Each end point is identified by its host computer’s address and a port number.

A port is a communications channel interface, usually supported by the computer’s operating system.

The 2 ports used by a socket may exist on two distinct computers connected by a common network, or the two ports may exist on the same computer.  The source and sink port numbers are chosen independently and need not be the same value.

A socket, therefore, has four values associated with it - a source (machine, port) and a sink (machine, port).
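The four values can be observed directly from Java's Socket class.  The following is a self-contained sketch (the class name SocketTuple is hypothetical) which makes a loopback connection inside a single program and prints the source and sink from the client's point of view:

```java
import java.net.*;

// Sketch of the four values associated with a socket - a source
// (machine, port) and a sink (machine, port) - demonstrated with a
// loopback connection inside one program.
public class SocketTuple {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);   // 0 = any free port
        Socket client = new Socket("localhost", server.getLocalPort());
        Socket accepted = server.accept();

        // The client's view of the connection:
        System.out.println("source: " + client.getLocalAddress().getHostAddress()
                + ":" + client.getLocalPort());
        System.out.println("sink:   " + client.getInetAddress().getHostAddress()
                + ":" + client.getPort());

        accepted.close();
        client.close();
        server.close();
    }
}
```

Note that the source port is chosen automatically by the system, while the sink port is the one the server is listening on.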

The only type of data that may be written to or read from a socket are bytes.  However, Java provides many utilities which convert other data types or objects to/from a sequence of bytes.
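One such utility pair is DataOutputStream and DataInputStream, which convert primitive values to and from bytes.  The sketch below (class name ByteConversion is illustrative) uses an in-memory buffer to stand in for the socket:

```java
import java.io.*;

// Sketch: converting primitive values to bytes and back - the same
// conversion a socket stream would carry.  A ByteArrayOutputStream
// stands in for the socket here.
public class ByteConversion {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        out.writeInt(60);          // a note number, written as 4 bytes
        out.writeUTF("Middle C");  // a string, written as length + bytes
        out.close();

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buffer.toByteArray()));
        System.out.println(in.readInt());  // prints 60
        System.out.println(in.readUTF());  // prints Middle C
    }
}
```

The reads must occur in the same order and with the same types as the writes, since the byte stream itself carries no type information.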

We will be using sockets to build a client-server application.  The client program (machine A) is a timed list of multimedia instructions (see page 27).  The server program (machine B) listens to the socket for an instruction and performs some sort of multimedia event upon its arrival.

How to run the example

Open a command line shell, and navigate your way to the directory /Examples/Sockets/ on the included CD-ROM.  Type java Server.  This is the “server” program - you should see the following:

Server listening on local port 4440...

On another machine (which can be the same machine), open another command line shell, and navigate your way to the directory /Examples/Sockets/.  Type java Client.  You should see the following:

Enter the name of the server computer:

Here, type the name of the computer on which the server program is running.  If all goes well, you should see something like the following:

Socket established from local port localhost:1040 to remote port adam:4440
Client ready.

The server window should now also have the message:

Socket established from local port localhost:4440 to remote port localhost:1040

If the client does not connect, the program will quit with an error message.  If this is the case, then try the following steps:

·        Make sure the server program is running, and displaying the message “Server listening on local port 4440...”

·        Try specifying the server machine using its IP address instead of its name.

Please note: The client and server programs are both very simple examples, and do not possess any fault tolerance.  Both programs will exit in the event of an error, or if the user quits the client program.  For this reason, you must remember to make sure the server program is running before trying to run the client.

If the connection was successful, then type a string into the client window and press enter.  The string should be re-printed in the remote server’s window.

Explanation

The socket example is composed of two programs - a client and a server.  The server is the receiving computer, and must be started before the client program.  The server listing is as follows:

Server Listing

import java.net.*;
import java.io.*;

java.net.* contains the essential socket classes and java.io.* contains the BufferedReader and InputStreamReader classes which allow us to read strings from a stream.  See \Docs\tutorial\essential\io\processing.html on the included CD-ROM for more information on readers.

public class Server
{
    public static void main(String argv[]) throws IOException
    {

The socket and reader classes throw many checked exceptions[4].  Rather than attempting to catch every single one of them, we declare that main throws all the exceptions - which will just exit with an error should any of them happen.  (Catching individual exceptions would make the program much more fault-tolerant, however.)

      int localPort= 4440;

The above line sets the local port, the port on which the server program will listen for incoming data.  Many port numbers have special meanings and should not be used.  In this example, the local port is 4440 - an unused port open for use by programmers.

      ServerSocket serverSocket= new ServerSocket(localPort);
      System.out.println("Server listening on local port " + serverSocket.getLocalPort() + "...");

This section establishes a new server socket on the port we specified.  A server socket is used to set up a connection, but is not used for any actual data transfer. At this point in the program, we have a reference to port 4440, but no communication is taking place.

      Socket incomingSocket= serverSocket.accept();

This line will cause the server port to become active, and the program will hang until an incoming connection request on port 4440 is received.  The object incomingSocket refers to the socket just established by the client machine.  The server program will now listen to the client’s socket for incoming data.

      BufferedReader in= new BufferedReader(
            new InputStreamReader(
                  incomingSocket.getInputStream()));

By nesting the functionality of three classes, we are able to extract strings from the socket.  The incomingSocket returns an InputStream, which is a stream of bytes.  An InputStreamReader converts this stream of bytes into a stream of characters.  Finally, a BufferedReader groups the stream of characters into a series of strings.

      String inputLine;

      while((inputLine= in.readLine())!=null)
      {
            System.out.println(inputLine.toString());
      }

This section waits for a string to arrive in the socket.  readLine is a method provided by BufferedReader and will hang until a complete String arrives in the socket.  When this string has arrived, we simply print it to the screen.

If readLine returns a null, this informs us that the socket has been closed by the client, and we jump out of the loop in order to exit the program.  (This isn’t very fault-tolerant - at this point, most programs would go back to listening for incoming connections or perform some other action.)

      in.close();
      incomingSocket.close();
      serverSocket.close();
    }
}

Before we exit the program, we tidy up by closing all the resources we opened.  Because of the order of dependency, we close our resources in the order last-opened, first-closed.
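As noted earlier, catching exceptions individually rather than declaring them all on main would make the server more fault-tolerant.  The following is a minimal sketch of that alternative (the class name TolerantServer and the method tryPort are hypothetical):

```java
import java.io.*;
import java.net.*;

// Sketch of a more fault-tolerant style: instead of letting an
// IOException propagate out of main and kill the program, catch it
// where it occurs and react.
public class TolerantServer {
    static boolean tryPort(int port) {
        try {
            ServerSocket serverSocket = new ServerSocket(port);
            serverSocket.close();
            return true;                 // the port could be opened
        } catch (IOException e) {
            // a real server might log the error, retry after a delay,
            // or fall back to another port here, rather than exiting
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(tryPort(4440)
                ? "Port 4440 available"
                : "Could not open port 4440");
    }
}
```

The structure of the example server stays the same; only the error handling moves from the method signature into the method body.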

Client Listing

import java.io.*;
import java.net.*;

java.io.* contains the classes we need to get input from the keyboard.

class Client
{
      public static void main(String[] args) throws IOException, UnknownHostException
      {

            String inStr;
            BufferedReader stdIn= new BufferedReader(
                  new InputStreamReader(System.in));
            System.out.print("Enter the name of the server computer: ");
            String remoteComputer= stdIn.readLine();

We set up a BufferedReader to read strings from the keyboard, the first of which is the name of the server computer. See \Docs\tutorial\essential\io\processing.html on the included CD-ROM for more information on readers.

The server computer is the machine on which the server portion of this example is currently running.

            int remotePort= 4440;
            Socket clientSocket= new Socket(remoteComputer, remotePort);

Port 4440 is the port on which the remote server is listening.  In the above code, the client is requesting to connect to port 4440 on the remote machine.  If this code executes successfully, the clientSocket is now connected.

            PrintWriter out= new PrintWriter(clientSocket.getOutputStream(), true);

clientSocket’s output stream requires bytes.  The above code uses the java.io.PrintWriter class to convert Strings to bytes.  Now, any Strings written to the variable out will be written to the socket.

            while((inStr= stdIn.readLine())!=null)
            {
                  out.println(inStr);
            }

This section reads a line from the keyboard and writes it to the socket.  If the user signals the end of input (CTRL-Z under Windows, CTRL-D under Unix), then inStr is null and control jumps out of the while loop without writing to the socket.

            out.close();
            stdIn.close();
            clientSocket.close();
      }
}

Finally we tidy up by closing all opened resources in the order last opened, first closed.

Running the Example

The above code has been simplified slightly - the code included on the CD-ROM also contains statements which print out details of the socket which has just been established.

Note that both programs display the same information about the socket, but from their own ‘perspective’ - the client’s local port is the server’s remote port and vice versa.

Appraisal

Sockets are a straightforward means of communication between two computers.  Their drawback lies in the fact that they are not particularly ‘natural’ to use.  Any communication between two machines must be manually converted into bytes, and every client program must know in advance the port number on which the server is listening.

Further Information

For further information on how to program using sockets, please refer to:

·        The Java Tutorial (\Docs\tutorial\networking\sockets\index.html on the included CD-ROM)

·        The Java API Documentation (\Docs\jdk\guide\net\index.html on the included CD-ROM)

Example: ThreeBlindMice

What is this example?

This example is the first actual realisation of a multimedia composition.  The composition is very simple - a single piano sound plays the melody to “Three Blind Mice”, while a full-screen graphics window displays the words synchronously.

The example uses a MidiServer and a TextServer.  A MidiServer is a JavaSound MIDI synthesiser which receives instructions via a socket.  A TextServer is a program which uses the Java 2D API.  The TextServer’s TextComponent displays a string in the centre of the screen.  The string is public and is set by the TextServer program, which receives instructions from a socket.

A client program cycles through an array of text and musical data, writing strings to the socket connected to the TextServer and writing Note objects to the socket connected to the MidiServer.   A Note object is a simple structure consisting of a note number, velocity and duration.

This example demonstrates writing objects to a socket as well as real-time media control using sockets.

How to run the example

What to type

Make sure that the JavaSound extensions are installed on the machine that will be the MIDI synthesiser, as described on page 47 of this report.  On this machine, open a command-line window and navigate your way to \Examples\ThreeBlindMice on the included CD-ROM.  Type java MidiServer.  The screen should display the following:

MIDI synthesiser initialised
MIDI server listening on local port 4445...

Move to the machine that will be the TextServer (which may be the same machine).  Navigate your way to \Examples\ThreeBlindMice on the included CD-ROM, and type java TextServer.  The screen should display the following:

Initialising Text server...

After a moment, the full-screen text server window will appear, and display “Ready...” in the centre of the screen.  Under Windows 9x/NT, some windows may still be visible.  Click the black backdrop of the TextServer full-screen graphics window to move it in front of the other windows.  If the taskbar is still visible, try pressing CTRL-ESC and then switching to the TextServer program.

Finally, move to the machine that will be the client (which may be the same machine).  Navigate your way to \Examples\ThreeBlindMice on the included CD-ROM, and type java Client.  The client program will ask you:

Please enter the name of the MIDI server computer:

Here, type the name or IP address of the machine on which the MIDI server is running.  For troubleshooting, please see the tips on page 51.

If the client successfully connects to the MIDI server, the MIDI server should say something like the following:

Socket established from local port localhost:1040 to remote port adam:4440

The client program will then ask you for the address of the text server, for which you should use the same procedure as above.

Finally, the client program will ‘ping’ the devices and wait for you to press the enter key to begin playing the composition.  Please wait until both servers report that a socket has been established (about 10 seconds) before pressing enter.

What to expect

The MIDI server will start playing the melody to Three Blind Mice, while the text server displays the lyrics in synchrony.

Explanation

This example contains five source files:

·        MidiServer.java  The MIDI server program

·        Note.java  An object which describes a note

·        TextServer.java  The text server main code

·        TextComponent.java  The graphics component for the text server

·        Client.java  The program which remotely controls the two servers

Most of the concepts in these programs have already been explained in the previous sections.  Therefore I shall only explain new concepts, beginning with the MIDI server.

Midi Server

import javax.media.sound.midi.*;
import java.net.*;
import java.io.*;

public class MidiServer
{
      public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException
      {
            int localPort= 4445;

The MIDI server listens on port 4445 and the text server listens on port 4444.  This allows us to have both server programs running on the same machine, should we need to.

            // Initialise MIDI synthesiser
            Synthesizer synth= MidiSystem.getSynthesizer(null);
            if(synth==null) { System.out.println("Could not get default synthesizer"); System.exit(1); }
            MidiChannel[] midiChannels= synth.getChannels();
            System.out.println("MIDI synthesiser initialised");

            // Initialise socket
            ServerSocket serverSocket= new ServerSocket(localPort);
            System.out.println("MIDI server listening on local port " + serverSocket.getLocalPort() + "...");
        Socket incomingSocket= serverSocket.accept();
            ObjectInputStream in= new ObjectInputStream(incomingSocket.getInputStream());

In this example, we are not reading strings from the socket, we are reading entire objects.  The client program converts a Note object into a stream of bytes and writes it to the socket.  An ObjectInputStream (above) converts the stream of bytes back into an object.

            // Play incoming Notes
            Note inputNote= new Note();
            while(inputNote.noteNumber!=-1)
            {
                  try { inputNote= (Note) in.readObject(); }
                  catch(EOFException e) {inputNote.noteNumber=-1;}
                  if(inputNote.noteNumber!=-1)
                  {
                        midiChannels[0].noteOn(inputNote.noteNumber, inputNote.velocity);
                        try {Thread.sleep(inputNote.duration);}  catch(InterruptedException e) {}
                        midiChannels[0].noteOff(inputNote.noteNumber, 64);
                  }
            }

The above loop begins by reading a Note from the socket.  If an EOFException is thrown, then this means that the client program has disconnected or an error has occurred in transmission.  In this instance, our example just sets inputNote.noteNumber to -1 in order to exit the program.  If this has not happened, then we switch the note on, pause for the desired number of milliseconds and then switch the note off.  The loop then waits for another note to arrive.

            // Quit the program
            in.close();
            incomingSocket.close();
            serverSocket.close();
            System.exit(0);

}
}

Note

public class Note implements java.io.Serializable
{

Note objects will be sent down a wire to another computer.  In order for this to happen, they need to be converted into a list (or ‘stream’) of bytes.  We need to declare that the Note class implements the Serializable interface, which is what the above line does.  Because our note class only consists of a few member variables, serialisation is performed automatically.

      public byte noteNumber= 60, velocity= 64;
      public long duration= 500;

Our note class consists of the above variables.  The duration is measured in milliseconds.

      public Note() {}

Programs may instantiate a Note by using this default constructor.  The new note will have parameters which match the variable initialisations above.

      public Note(byte noteNumber, byte velocity, long duration)
      {
            if(noteNumber<0) this.noteNumber= 0; else this.noteNumber= noteNumber;
            if(velocity<0) this.velocity= 0; else this.velocity= velocity;
            if(duration<0) this.duration= 0; else this.duration= duration;
      }

This constructor allows a programmer to construct a new Note with specific values.  The values are validated before assigning them.

      public String toString()
      {
            return "noteNumber= " + noteNumber + ", velocity= " + velocity + ", duration= " + duration;
      }

The toString method is a debugging aid which returns the Note’s contents as a string.

}
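The serialisation described above can be exercised without any socket at all.  The sketch below performs the same object-to-bytes-to-object round trip that the client and MidiServer perform over their socket; NoteCopy is a condensed copy of the Note class above, included only so that the sketch is self-contained, and the class names are illustrative.

```java
import java.io.*;

// Condensed copy of the report's Note class, so this sketch stands alone.
class NoteCopy implements Serializable {
    public byte noteNumber = 60, velocity = 64;
    public long duration = 500;
}

// Round-trip sketch: serialise a NoteCopy to bytes and back - the
// same conversion performed over the client/MidiServer socket.
public class NoteRoundTrip {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        NoteCopy original = new NoteCopy();
        original.noteNumber = 72;

        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buffer);
        out.writeObject(original);                  // object -> bytes
        out.close();

        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()));
        NoteCopy copy = (NoteCopy) in.readObject(); // bytes -> object
        System.out.println("noteNumber= " + copy.noteNumber
                + ", duration= " + copy.duration);  // prints noteNumber= 72, duration= 500
    }
}
```

Because the class consists only of simple member variables, no extra serialisation code is needed beyond implementing Serializable.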

Text Server

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.io.*;
import java.net.*;

public class TextServer
{
      public static void main(String[] args) throws IOException
      {
            int localPort= 4444;

            // Initialise TextServer
            System.out.println("Initialising Text server...");
            JWindow wnd= new JWindow();
            wnd.addWindowListener(new WindowAdapter() {public void windowClosing(WindowEvent e) {System.exit(0);}});
            wnd.setBackground(Color.black);
            wnd.getContentPane().setBackground(Color.black);
            wnd.setSize(wnd.getToolkit().getScreenSize());
            TextComponent t= new TextComponent();
            wnd.getContentPane().add(t);
            wnd.show();

            // Initialise socket
            ServerSocket serverSocket= new ServerSocket(localPort);
            Socket incomingSocket= serverSocket.accept();
            BufferedReader in= new BufferedReader(new InputStreamReader(incomingSocket.getInputStream()));
            String inputLine;
            while((inputLine= in.readLine())!=null)
            {
                  t.s= inputLine.toString();
                  wnd.repaint();
            }

            in.close();
            incomingSocket.close();
            serverSocket.close();
            System.exit(0);
      }
}

Text Component

import java.awt.*;

public class TextComponent extends Component
{
      public String s= "Ready...";
      private int fontSize= 48;

      public void paint(Graphics g)
      {
            Graphics2D g2= (Graphics2D) g;
            g2.setFont(new Font("Serif", Font.BOLD, fontSize));
            g2.setPaint(Color.orange);
            FontMetrics fm= g2.getFontMetrics();
            g2.drawString(s, (getSize().width-fm.stringWidth(s))/2,
                  (getSize().height-fm.getHeight())/2+fm.getHeight());
      }
}

The final line in the paint method draws the string, s, in the centre of the screen.  It does this by first establishing the size of the bounding box the string would occupy, using the graphics context’s FontMetrics.  It then takes into account the size of the screen, using getSize, and plots the text at the calculated co-ordinates.
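The centring arithmetic can be checked with a small worked example.  The screen size and font metrics below are assumed values, not measurements from the actual TextServer:

```java
// Worked example of the centring arithmetic used in paint, with
// assumed values standing in for the screen size and font metrics.
public class CentreText {
    public static void main(String[] args) {
        int screenWidth = 800, screenHeight = 600;  // assumed screen size
        int stringWidth = 200, fontHeight = 48;     // assumed font metrics

        int x = (screenWidth - stringWidth) / 2;              // left edge of the text
        int y = (screenHeight - fontHeight) / 2 + fontHeight; // baseline of the text

        System.out.println("x= " + x + ", y= " + y);  // prints x= 300, y= 324
    }
}
```

The extra fontHeight added to y is needed because drawString positions text by its baseline, not its top edge.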

Client

import java.io.*;
import java.net.*;

class Client
{
      private static int tempo= 15;
      private static int midiPortNumber= 4445;
      private static int textPortNumber= 4444;

      private static final byte[] noteNumbers= {76, 74, 72, 76, 74, 72, 79, 77, 77, 76, 79, 77, 77, 76, 79, 84, 84, 83, 81, 83, 84, 79, 79, 79, 84, 84, 84, 83, 81, 83, 84, 79, 79, 79, 84, 84, 84, 83, 81, 83, 84, 79, 79, 79, 77, 76, 74, 72};
      private static final byte[] durations= {60, 60, 120, 60, 60, 120, 60, 40, 20, 120, 60, 40, 20, 100, 20, 40, 20, 20, 20, 20, 40, 20, 40, 20, 20, 20, 20, 20, 20, 20, 40, 20, 40, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 40, 20, 60, 60, 120};
      private static final String[] words= {"Three", "Blind", "Mice", "Three", "Blind", "Mice", "See", "How They", "", "Run", "See", "How They", "", "Run", "They All", "", "Ran After", "", "", "The Farmer's", "", "", "Wife", "Who Cut Off Their Tails", "", "", "", "", "With a Carving Knife", "", "", "", "", "Did Ever You See", "", "", "", "", "Such a Thing", "", "", "In Your Life", "", "", "As Three", "", "Blind", "Mice"};

      public static void main(String[] args) throws IOException, UnknownHostException
      {
            BufferedReader kbIn= null;
            kbIn= new BufferedReader(new InputStreamReader(System.in));

            // Connect to MIDI server
            System.out.print("Please enter the name of the MIDI server computer: ");
            String midiServerName= kbIn.readLine();        
            Socket midiSocket= new Socket(midiServerName, midiPortNumber);
            ObjectOutputStream midiOut= new ObjectOutputStream(midiSocket.getOutputStream());

            // Connect to Text Server
            System.out.print("Please enter the name of the Text server computer: ");
            String textServerName= kbIn.readLine();        
            Socket textSocket= new Socket(textServerName, textPortNumber);
        PrintWriter textOut= new PrintWriter(textSocket.getOutputStream(), true);
           
            // Ping Devices
            midiOut.writeObject(new Note((byte) 72, (byte) 0, (byte) 100));
            textOut.println("");
            System.out.println("Client program ready.  Press enter to continue...");
            String d= kbIn.readLine();

The first instruction sent to a server program via a socket always takes much longer than subsequent instructions, because the transmission mechanisms are set up on first use.  Once the first message has been sent, subsequent instructions travel much more quickly.

            // Play Song
            for(int i= 0; i<=47; i++)
            {
                  midiOut.writeObject(new Note(noteNumbers[i], (byte) 64, durations[i]*tempo-20));
                  if(!words[i].equals("")) textOut.println(words[i]);
                  try {Thread.sleep(durations[i]*tempo);} catch(InterruptedException e) {}
            }

This section loops through all 48 elements of the arrays.  Data from the note pitch array and duration array are used to build a Note object which is written to the MIDI server socket.  Note that the duration of this Note object has been reduced by 20ms.  This is to give the MIDI server program a little more time to receive the note, play it, and listen for the next note.

Data from the text array may or may not be present.  If it is present, it is written to the text server socket, otherwise no action is taken.

Finally the loop pauses for the duration of this element.

            // Tidy Up
            midiOut.close();
            textOut.close();
            midiSocket.close();
            textSocket.close();
      }
}

Appraisal

This example is a very simple demonstration of distributed multimedia control.  There is absolutely no fault-tolerance built into the system, and the system is only expected to perform in soft real-time.

Soft real-time means that the audio visual elements of the composition should be played more-or-less in time with the event list (and in time with each other), but there are no guarantees.

From the time the client program initiates an instruction to the time the physical multimedia output occurs might be a few milliseconds or a few tenths of a second.

The Thread.sleep method only guarantees that the program will pause for a minimum amount of time.  There is no guarantee that the program will wake up on time.

System performance will have an effect here.  If all three programs are running on a machine with poor processing power, the server programs will hog the processor and the event list program will tend to wake up later than it should.
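The oversleep described above is easy to measure.  The following sketch (class and method names are hypothetical) times an actual call to Thread.sleep:

```java
// Sketch measuring the overshoot of Thread.sleep - the source of the
// timing drift described above.  sleep only guarantees a minimum
// pause; the measured time may be noticeably longer on a busy machine.
public class SleepDrift {
    static long measure(long requested) throws InterruptedException {
        long before = System.currentTimeMillis();
        Thread.sleep(requested);
        return System.currentTimeMillis() - before;  // actual pause
    }

    public static void main(String[] args) throws InterruptedException {
        long elapsed = measure(500);
        System.out.println("Requested 500ms, slept " + elapsed + "ms");
    }
}
```

Over the course of a composition these small overshoots accumulate, which is why the event list tends to run slightly behind the intended timing.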

Remote Method Invocation

What is RMI?

The Remote Method Invocation mechanism is another method that two computers can use to intercommunicate.

The theory behind RMI is that a remote server machine hosts a software object.  The client establishes a reference to this object, from which point the object appears to be local to the client.  The client can now execute methods of this object in exactly the same way it would execute the methods of a local object.

In reality, the client program is using a local object - the server stub.  The server stub is an object which implements the same interface (set of method calls) as the remote server.  The server stub appears to be the server object, but, in actual fact, converts the method call into a series of bytes and sends it, via a socket, to the remote object.

At the other end, the server skeleton converts the byte stream back into a method call and invokes it on the server object.  At this point, the server skeleton may also pass a return variable back to the server stub.  The server stub passes this variable back to the client program.

Before the client can use a remote object, it must obtain a reference to the object by using the naming service.  The naming service holds a list of all available remote objects, including their name and their computer address.

We will be using RMI to build a client-server application.  The client program is a timed list of multimedia instructions (see page 27).  The server program waits for an instruction and performs some sort of multimedia event upon its arrival.
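The whole round trip - bind, look up, invoke - can be sketched in one self-contained program.  The names EchoDemo and echo() are illustrative, and the registry is created in-process in lieu of running rmiregistry separately; modern Java generates the stub dynamically, so no rmic step is needed here.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Self-contained RMI sketch: create a registry in-process, bind a
// remote object, look it up by name and invoke a method on the
// reference (the stub) that comes back.
public class EchoDemo extends UnicastRemoteObject implements EchoDemo.EchoInterface {

    public interface EchoInterface extends Remote {
        String echo(String s) throws RemoteException;
    }

    public EchoDemo() throws RemoteException { super(); }

    public String echo(String s) throws RemoteException {
        return "echo: " + s;   // executed on the server object
    }

    public static String demo(int port) throws Exception {
        Registry reg = LocateRegistry.createRegistry(port); // in lieu of rmiregistry
        reg.rebind("Echo", new EchoDemo());                 // server side: bind by name
        EchoInterface ref = (EchoInterface) reg.lookup("Echo"); // client side: look up
        return ref.echo("hello");                           // call through the stub
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo(2099));  // prints echo: hello
        System.exit(0);  // exported objects otherwise keep the VM alive
    }
}
```

Note that the client-side code only ever mentions EchoInterface; the concrete EchoDemo class is needed only where the object is created, which is the property the report's Server/ServerInterface split relies on.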

How to run the example

What to Type

Open a command line shell and navigate your way to the directory /Examples/RMI/ on the included CD-ROM.  Type rmiregistry and press enter.  The RMI registry must be running before any remote objects can be started.

Open another command line shell on the same machine, and navigate your way to the directory /Examples/RMI/ on the included CD-ROM.  Type java Server.  If all goes well, the server should display the message:

Binding Server...

The program will pause for a few moments before printing:

Server ready...
Type 'quit' to exit the Server

If this has not happened then please refer to troubleshooting, on page 71.

Now move to the machine that will be the client (which may be the same machine), open another command line shell, and navigate your way to the directory /Examples/RMI/.  Type java Client.  You should see the following:

Enter the name of the server computer:

Here, type the name of the server computer you are using.  If all goes well, the client will display:

Text to send to TextServer:

Again, if this has not happened, please refer to troubleshooting, on page 71.

If the connection was successful, then type a string into the client window and press enter.  The string should be re-printed in the remote server’s window.

You may also wish to run the \Utils\RMI\RegistryList program, provided on the included CD-ROM, which will list the remote objects running on any particular machine.

What to do if the example doesn’t work

Once an application which uses RMI is successfully up and running, few errors occur.  Unfortunately, RMI applications can be very difficult to get up and running.

In short, the rmiregistry program tells the client and server programs where the stub class files are.  If the rmiregistry can see these class files in its local classpath, then it will return references to local files.  This will cause problems if either the client or server program cannot see the rmiregistry’s local file system.

If this is the case, more serious applications will make sure the rmiregistry cannot see the class files, and explicitly tell it where the class files are by using the codebase option when starting the server.

In our case, the least problematic way of getting the example working is to make sure that all stub files and interfaces are in the current directory, before starting any of the programs.  Making copies of the Examples/RMI directory is legal, and will help in this instance.

If the server machine initialises successfully, but the client machine cannot find it, try entering the IP address of the server machine in the client window.

More explanations may be found in \Docs\jdk\guide\rmi\faq.html on the included CD-ROM.

Explanation

This example is composed of the following source files:

·        Server.java  The server program

·        Client.java  The client program

·        ServerInterface.java  The interface that the server implements and which the client makes use of

·        RegistryList.java  A simple utility which lists the names of all objects registered on a specific server

The files Server_Skel.class and Server_Stub.class are both created automatically from the server code by using the rmic command.  To create the stub and skeleton classes for a remote object, use rmic object, where object is the name of a compiled remote object.

ServerInterface

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface ServerInterface extends Remote
{
      void text(String s) throws RemoteException;
}

ServerInterface defines the type of object that a client uses and which the Server class must implement.

At run-time, the client does not have access to the remote Server class, and, therefore, cannot create an instance of it.  Instead, we give the client access to a ServerInterface, which it can use in lieu of the real Server class.  When writing RMI-based applications, these interfaces are always created before the server or client.

The ServerInterface defines only one method, text().  Because communication may break down between the client and the server, we declare that text() may throw a RemoteException.
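This interface-based style can be seen without any networking at all.  In the small local sketch below (the names Greeter and GreeterImpl are illustrative, not part of the example), the calling code only ever sees the interface type - exactly the relationship between the RMI client, ServerInterface and Server:

```java
// Illustrative sketch: the caller uses only the interface type, never the
// implementing class.
interface Greeter
{
      String greet(String name);
}

class GreeterImpl implements Greeter
{
      public String greet(String name) { return "Hello " + name; }
}

public class InterfaceDemo
{
      public static void main(String[] args)
      {
            Greeter g= new GreeterImpl();   // in RMI, Naming.lookup supplies this reference
            System.out.println(g.greet("World"));
      }
}
```

In the RMI case, the only difference is that the object behind the interface lives on another machine.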

Server

import java.rmi.*;
import java.rmi.server.*;
import java.io.*;
import java.net.*;

public class Server extends UnicastRemoteObject implements ServerInterface
{

Any RMI server relies on classes found in the java.rmi and java.rmi.server packages.  The UnicastRemoteObject class, which is the superclass of Server, contains useful methods that enable Server to act as a remote object.

      public Server() throws RemoteException
      {
            super();
      }

This is the default constructor for the server object.  It must be declared explicitly because the superclass constructor may throw a RemoteException, which this constructor must in turn declare; without it, the server will not compile.

      public void text(String s) throws RemoteException
      {
            System.out.println(s);
      }

This is the only remote method that the server implements.  It simply displays a string on the screen.

      public static void main(String[] args) throws RemoteException, MalformedURLException, IOException, NotBoundException
      {

Again, we delegate all the exceptions to main, which will quit with an error message.  Note that this is only to make the example simpler to understand, and is not fault-tolerant in any way.

            // Create and bind new Server
            System.out.println("Binding Server...");
            ServerInterface s= new Server();
            String hostName= InetAddress.getLocalHost().getHostName();
            Naming.rebind("//" + hostName + "/Server", s);
            System.out.println("Server ready...");

To create our remote server, we begin by creating an instance of the server object, s.  Note that s is of type ServerInterface.

We then bind object s to the RMI naming service.  Binding an object informs the naming service that the object is active and ready to receive incoming method calls.

When we bind an object, we supply the naming service with a name for the object as a URL-formatted string, //computer/ObjectName.  ‘computer’ is the name of the computer on which the object is running, and ‘ObjectName’ is the name we give to this object (“Server” in our case).  We also pass s to the naming service - this gives the naming service the object’s address, which is ultimately passed to the client program.

            System.out.println("Type 'quit' to exit the Server\n");
            while(new BufferedReader(new InputStreamReader(System.in)).readLine().equalsIgnoreCase("quit")==false) System.out.println("\nType 'quit' to exit the Server\n");

The above section loops, waiting for the user to type ‘quit’.

            // Unbind the Server and exit the program
            System.out.println("Unbinding Server...");
            Naming.unbind("//" + hostName + "/Server");
            System.out.println("Finished!");
            System.exit(0);
      }
}

When the user does type ‘quit’, the server program unbinds itself, which informs the naming service that this program is no longer available.

At this point, the server program still has several non-daemon threads running and so does not terminate, even though we have reached the end of the main method.  For this reason, we exit explicitly by using System.exit.
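The same behaviour can be seen without RMI.  In this hypothetical sketch (ExitDemo and its names are illustrative), a single plain non-daemon thread is enough to keep the JVM alive after main returns - which is why the server has to call System.exit:

```java
public class ExitDemo
{
      // A JVM only terminates on its own when no non-daemon threads remain
      public static boolean keepsJvmAlive(Thread t)
      {
            return t.isAlive() && !t.isDaemon();
      }

      public static void main(String[] args)
      {
            Thread worker= new Thread(new Runnable()
            {
                  public void run()
                  {
                        try {Thread.sleep(60000);} catch (InterruptedException e) {}
                  }
            });
            worker.start();
            System.out.println("Would block JVM exit: " + keepsJvmAlive(worker));
            worker.interrupt();   // tidy up here; System.exit(0) is the blunt alternative
      }
}
```

The RMI runtime creates such non-daemon threads on behalf of every exported object, so the server example is in exactly this situation.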

Client

import java.rmi.*;
import java.io.*;

public class Client
{
      public static void main(String[] args) throws NotBoundException, RemoteException, java.net.MalformedURLException, IOException
      {
            BufferedReader kbIn= new BufferedReader(new InputStreamReader(System.in));
            System.out.print("Enter the name of the server computer: ");
            String remoteComputer= kbIn.readLine();
            ServerInterface s= (ServerInterface) Naming.lookup("//" + remoteComputer + "/Server");

The above line creates an object, s, of type ServerInterface.  The program asks the naming service for an object called ‘Server’ running on the machine whose name is stored in remoteComputer.

The lookup method returns an object of type Remote.  We cast this result to ServerInterface, so that we may use the Server’s remote methods.

            String str= "";
            while(str!=null)
            {
                  System.out.print("Text to send to TextServer: ");
                  str= kbIn.readLine();
                  if(str!=null) s.text(str);    // readLine returns null at end of input
            }
            System.out.println("\nClient program exiting");
      }
}

Finally, we loop, getting a string from the keyboard and passing this as a parameter to the remote Server’s method, text.  Note that the call ‘s.text(str)’ treats the remote server, s, no differently from a local object.

Appraisal

When compared to plain sockets, RMI is a much more natural method of communicating with remote computers.  It is particularly suitable for building a list of multimedia instructions, as each entry in the list need only take up one line and will have a more descriptive name.  Furthermore, a remote object may have a rich set of method calls which may be used directly.  To achieve the same functionality using sockets would require a lot of complicated and unnatural mapping.

However, the RMI mechanism incurs more overhead than plain sockets, and, as such, takes longer to connect and fractionally longer to transmit messages.  RMI is also much more complicated to set up than plain sockets, requiring considerable configuration and technical knowledge.

Further Information

For further information on how to program using RMI, please refer to:

·        The Java Tutorial (\Docs\tutorial\rmi\index.html on the included CD-ROM)

·        The Java API Documentation (\Docs\jdk\guide\rmi\index.html on the included CD-ROM)

Java Native Interface

What is the Java Native Interface?

The Java Native Interface (JNI) is a mechanism which allows a program written in Java to use executable code which is native to the host operating system.

Please note, this example uses a compiled DLL file and is for PC users only.

JNI is useful for this project, because it gives us access to a machine’s native multimedia features, which Java may not support.

How to run the example

What to type

Open a command prompt window, navigate your way to the directory /Examples/JNI on the included CD-ROM and type java JniTest

You should see the following:

native> Hello World!

Explanation

There are many files in the directory, /Examples/JNI:

·        JniTest.java  The main Java source code, which calls the native method

·        JniTest.class  The main Java program

·        jnitest.dll  The compiled native code

·        JniTest.h  A C header file, which JniTestImp.c must implement

·        JniTestImp.c  Native C source code

·        jnitest.exp, jnitest.lib, JniTestImp.obj  Intermediate files created by the C compiler

JniTest.java

This program is very much like the Java “Hello World” example.  The only difference is that we are not using System.out.println to display the string - we are using printText - a native method.

class JniTest
{
      public static native void printText(String s);

We must tell the Java compiler that we are using a method called printText.  However, because this method is implemented using native code, we declare that it is native.  We finish the line with a semicolon, and we do not use curly brackets.

      static {System.loadLibrary("jnitest");}

This line tells the Java runtime system where to find the printText implementation.  The system looks in the path for a file called jnitest.dll.

      public static void main(String[] args)
      {
            printText("Hello World!");
      }
}

Finally, our main method calls the printText method.
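The library name passed to System.loadLibrary is platform-neutral; the runtime system maps it to a platform-specific file name.  The standard method System.mapLibraryName shows this mapping, as the small sketch below illustrates:

```java
public class LibNameDemo
{
      public static void main(String[] args)
      {
            // On Windows this prints jnitest.dll; on most UNIX systems, libjnitest.so
            System.out.println(System.mapLibraryName("jnitest"));
      }
}
```

This is why the Java source says loadLibrary("jnitest") while the file on disc is called jnitest.dll.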

JniTest.h

/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class JniTest */

#ifndef _Included_JniTest
#define _Included_JniTest
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     JniTest
 * Method:    printText
 * Signature: (Ljava/lang/String;)V
 */
JNIEXPORT void JNICALL Java_JniTest_printText
  (JNIEnv *, jclass, jstring);

#ifdef __cplusplus
}
#endif
#endif

This file is generated automatically by using the command

javah -jni JniTest

on the JniTest.class file.  The javah command examines the class file for any native method declarations.  It finds the line

public static native void printText(String s);

and converts this into the line

JNIEXPORT void JNICALL Java_JniTest_printText
  (JNIEnv *, jclass, jstring);

which is a function prototype for the corresponding C function.

The actual name of the function is Java_JniTest_printText.  The JNIEnv* parameter is a pointer to the host Java environment, which may be used to retrieve variables.  Because printText is declared static, the second parameter is a jclass - a reference to the JniTest class itself.  (For an instance method it would instead be a jobject, similar to “this” in Java.)

The final parameter is the string which we will be passing to the function.

The two other keywords, JNIEXPORT and JNICALL, ensure that the source code compiles on platforms such as Win32 that require special keywords for functions exported from dynamic link libraries.

JniTestImp.c

#include <jni.h>
#include "JniTest.h"
#include <stdio.h>

The jni.h header file is part of the JDK and provides information that the native language code requires to interact with the Java runtime system.  The JniTest.h header file contains the function prototype we are trying to implement.  The stdio.h header file declares the standard C I/O functions, including printf.

JNIEXPORT void JNICALL
Java_JniTest_printText(JNIEnv *env, jclass cls, jstring jstr)
{
      if(jstr!=NULL)
      {
            const char *str= (*env)->GetStringUTFChars(env, jstr, 0);
            printf("native> %s\n", str);
            (*env)->ReleaseStringUTFChars(env, jstr, str);
      }
}

A string in the C programming language is a pointer to an array of characters, conventionally 7-bit ASCII.  A string in the Java programming language, however, is represented using Unicode characters.

In order to print strings in our C program, we need to convert from a Java string to a UTF-8 string, which is compatible with 7-bit ASCII.

This is achieved by using the GetStringUTFChars function.  This function is provided by the host Java virtual machine and is accessed via the (*env) variable.

After we have printed our string, we must call the ReleaseStringUTFChars function.  This informs the VM that the native method is finished with the string and the VM can free the memory it was taking.  Failing to call ReleaseStringUTFChars results in a memory leak which will ultimately lead to system memory exhaustion.
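The claim that UTF-8 is compatible with 7-bit ASCII can be checked from Java itself.  This small sketch (not part of the JNI example; the class name is illustrative) encodes a string as UTF-8 and compares the resulting bytes with the character codes:

```java
public class Utf8Demo
{
      // Encode a Java (Unicode) string as UTF-8 bytes
      public static byte[] utf8Bytes(String s)
      {
            try {return s.getBytes("UTF-8");}
            catch (java.io.UnsupportedEncodingException e) {return new byte[0];}   // UTF-8 is always supported
      }

      public static void main(String[] args)
      {
            String s= "Hello";
            byte[] b= utf8Bytes(s);
            // For 7-bit ASCII characters, the UTF-8 byte equals the character code
            for(int i= 0; i<b.length; i++) System.out.println(s.charAt(i) + " = " + b[i]);
      }
}
```

For characters in the 7-bit ASCII range the two encodings coincide byte for byte, which is exactly what lets printf display the converted string.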

The above C code is then compiled into a shared library.  Under Windows 9x/NT this will be a DLL file and under UNIX, this will be a .so file.

The command line to do this using Microsoft Visual C++ 4.0 is

cl -Ic:\java\include -Ic:\java\include\win32

      -LD JniTestImp.c -Fejnitest.dll

Appraisal

The Java Native Interface is an excellent method of gaining access to efficient native code, whilst still maintaining the benefits of Java-based programming.

However, JNI only caters for the C and C++ programming languages.  In many cases, various conversions between native and Java types must be carried out, and these can become very ‘messy’.  Using JNI means dealing with many unnatural mappings, and it can be complicated to use.

Further Information

For further information on how to program using the Java Native Interface, please refer to:

·        The Java Tutorial (\Docs\tutorial\native1.1\index.html on the included CD-ROM)

·        The Java API Documentation (\Docs\jdk\guide\jni\index.html on the included CD-ROM)

Native MIDI

Unfortunately, the jmidi API is only available for the PC at present.  This section is for PC users only.

What is Native MIDI?

The JavaSound API synthesises MIDI using software.  This can be quite costly in terms of CPU usage.  A better solution would be to somehow access the native MIDI capabilities of the computer.  For a PC, this would mean gaining access to the MIDI synthesiser on the machine’s soundcard.

This is achieved by making use of the Java Native Interface, as explained on page 77.  We would write native C code to drive the MIDI soundcard, and then create a Java wrapper.

Fortunately this has already been done by Robert Marsanyi [Marsanyi, 1999] in his jmidi package.

We will be using jmidi to create a MIDI server, which is introduced on page 57.  The new MIDI server will implement exactly the same interface as a JavaSound midiChannel.  This means that a MIDI server may be implemented using either jmidi or JavaSound, which is useful for a machine which has no MIDI capabilities.

How to run the example

What to type

Locate the file JavaMidi_04.zip on the included CD-ROM and extract it to a temporary location.  Copy the file \TestJavaMidi\MidiPort.dll to the directory \java\bin\ of your Java installation.  Copy the directory \TestJavaMidi\jmidi\ to the directory \java\ of your Java installation.  Two class files should now exist in the directory \java\jmidi\ of your Java installation.  Finally (and optionally), copy the directory \doc\ to \java\doc\jmidi\ in your Java installation.

Open a command line window and navigate your way to the directory Examples\jmidi on the included CD-ROM.  Type java Main and press enter.  Something such as the following should appear:

Device 0: AWE64 MIDI Out
Device 1: SB AWE64 MIDI Synth

Enter the number of the MIDI output port:

If this is not the case, please ensure that MidiPort.dll is in the current system path.  The instructions on how to set up classpaths and packages on page 34 may also be of help.

If all went well, type the number of the device to which the MIDI messages will be sent and press enter.

Sometimes, attempting to produce the above list using jmidi will cause a fault in the operating system, and the example will not work.  If this is the case, move to the directory Examples\jmidi\NoList, type java Main and press enter.  This version of the program is identical, except that it produces no list of devices - experiment with values such as 0, 1 or 2 to see which port produces audible output.

What to expect

Providing that the audio output of the computer is active, you should hear the machine play a scale from Middle C to C the octave above.

Explanation

import jmidi.*;
import java.io.*;

The jmidi package contains all the Java classes required to access native MIDI devices.  Some of their methods are implemented using native code.

class Main
{
      public static void main(String[] args) throws MidiPortException, IOException
      {
            for(int i= 0; i<MidiPort.getNumDevices(MidiPort.MIDIPORT_OUTPUT); i++) System.out.println("Device " + i + ": " + MidiPort.getDeviceName(MidiPort.MIDIPORT_OUTPUT, i));

The above line prints out a list of all MIDI output devices using the getDeviceName function.  See the jmidi API documentation for more information on getDeviceName and getNumDevices.

            System.out.print("\nEnter the number of the MIDI output port: ");
            int midiPort= new Integer(new BufferedReader(new InputStreamReader(System.in)).readLine()).intValue();

            MidiPort mp= new MidiPort(0, midiPort);
            mp.open();

The above section gets the device number from the user and attempts to open the device.  The device must be opened before any MIDI messages can be sent to it.

            mp.writeShortMessage((byte)0xC0, (byte) 0);
            for(int i= 60; i<=72; i++)
            {
                  mp.writeShortMessage((byte)0x90, (byte)i, (byte)127);
                  try {Thread.sleep(400);} catch (InterruptedException e) {}
                  mp.writeShortMessage((byte)0x80, (byte)i, (byte)0);
            }

The first MIDI message we send is a program change.  A MIDI message is of the form (status byte, data byte) or (status byte, data byte 1, data byte 2).  The 4 MSB of the status byte form the instruction and the 4 LSB form the channel number.  MIDI programmers often use hexadecimal to represent the status byte.  Hexadecimal values begin with 0x.

The program change message is of the form (0xC0, 0).  This means we are sending a program change to channel 0, with a program value of 0 (Grand Piano).

The next section loops i through 60 to 72, which represent the notes middle C to C one octave above.  It sends a note on message (0x90) with note number i and velocity 127.  It then waits for 400 milliseconds and sends a note off message (0x80) with note number i and velocity 0.  The note off message must be sent to switch off notes, which would otherwise sound indefinitely.

            mp.close();
      }
}

Finally, we close the MIDI port.  The MIDI port must be closed before any other programs can use it.  Problems are often caused if a program crashes and exits without closing the MIDI port.  In this case, claiming the MIDI port back can be very difficult and often requires a system re-boot.
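The status-byte layout described earlier - the instruction in the top four bits, the channel number in the bottom four - can be sketched in plain Java.  The class and method names below are illustrative and are not part of jmidi:

```java
public class MidiStatus
{
      // Combine a command nibble (e.g. 0x90 for note on) with a channel number
      public static int statusByte(int command, int channel)
      {
            return (command & 0xF0) | (channel & 0x0F);
      }

      public static void main(String[] args)
      {
            System.out.println(Integer.toHexString(statusByte(0x90, 0)));  // note on, channel 0
            System.out.println(Integer.toHexString(statusByte(0x80, 3)));  // note off, channel 3
      }
}
```

So a note on for channel 3 has status byte 0x93, and the same command sent to channel 0 is the familiar 0x90 seen in the example code.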

Appraisal

The jmidi package is an excellent means of gaining access to the native MIDI capabilities of a PC.

However, the package is based around MIDI messages and does not provide descriptive function names as the JavaSound MIDI package does.  For example, compare noteOn(60, 127) (JavaSound) with writeShortMessage((byte)0x90, (byte)60, (byte)127) (jmidi).

This means that the programmer must understand the nature of MIDI messages, and may need to refer to various tables.
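One possible remedy - a sketch only, not part of jmidi or of this project's MIDI server - is a thin wrapper that builds the raw message bytes behind descriptive names:

```java
public class FriendlyMidi
{
      // Build the three bytes of a note on message from descriptive arguments
      public static byte[] noteOn(int channel, int note, int velocity)
      {
            return new byte[] {(byte)(0x90 | (channel & 0x0F)), (byte)note, (byte)velocity};
      }

      // A note off is status 0x80 plus the channel, with velocity 0
      public static byte[] noteOff(int channel, int note)
      {
            return new byte[] {(byte)(0x80 | (channel & 0x0F)), (byte)note, (byte)0};
      }

      public static void main(String[] args)
      {
            byte[] msg= noteOn(0, 60, 127);
            System.out.println(Integer.toHexString(msg[0] & 0xFF) + " " + msg[1] + " " + msg[2]);
      }
}
```

A wrapper of this kind lets the composer write noteOn(0, 60, 127) while the hexadecimal detail stays in one place.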

Further Information

For further information on how to program using the jmidi package, please see the jmidi homepage, http://www.softsynth.com/javamidi

Example: Minim

This example uses the jmidi package which is only available for the PC.  Therefore this section is for PC users only.

What is this example?

This example is a distributed realisation of a musical composition.  A MIDI server gains access to the host machine’s native MIDI synthesiser through the jmidi package, and receives instructions from the client via RMI.

The client program establishes references to five MIDI servers and then runs through a list of MIDI server method calls.

This example demonstrates native MIDI and real-time media control using RMI.

How to run the example

As with the previous RMI example, getting the programs up and running is not always straightforward.

This demonstration uses up to six computers: five machines act as MIDI servers and one machine acts as the client.  However, the example is designed so that any number of machines may be used, including just one.

How to Run the Servers

Make sure the jmidi package is installed on all the machines which will be acting as MIDI servers - refer to page 84 for details on how to do this.

To run the MIDI server, copy the directory \Examples\Minim on the included CD-ROM to the local hard disc.  (You may instead copy the directory to a networked drive, providing all concerned machines can ‘see’ it).

Open a command prompt window and navigate your way to the Minim directory just created.  Type rmiregistry and press enter.  Don’t worry if nothing happens - the rmiregistry program produces no output.

Now open another command prompt window and navigate your way to the same directory.  Type java MIDIServer and press enter.  You should see something such as the following:

Device 0: AWE64 MIDI Out
Device 1: SB AWE64 MIDI Synth

Enter the number of the MIDI output port:

Type the number of the MIDI output port you would like the MIDI server to play through.  You should then (hopefully) see the following:

MIDI synthesiser initialised

Binding MIDIServer(0)...
Binding MIDIServer(1)...
Binding MIDIServer(2)...
Binding MIDIServer(3)...
Binding MIDIServer(4)...
Binding MIDIServer(5)...
Binding MIDIServer(6)...
Binding MIDIServer(7)...
Binding MIDIServer(8)...
Binding MIDIServer(9)...
Binding MIDIServer(10)...
Binding MIDIServer(11)...
Binding MIDIServer(12)...
Binding MIDIServer(13)...
Binding MIDIServer(14)...
Binding MIDIServer(15)...

MIDIServer Ready!
Type 'quit' to exit the Server

If this is not the case, then please refer to jmidi installation tips on page 84 or the RMI installation tips on page 71.  Sometimes, attempting to produce the list of MIDI ports will cause a fault in the operating system, as explained on page 84.  If this is the case, change directory to \Examples\Minim\NoList and type java MIDIServer

Repeat the above process on all machines that will be acting as MIDI servers.

How to Run the Client

The client program may be run on a separate machine or the same machine as one of the MIDI servers.  Open a command prompt window and navigate your way to the directory \Examples\Minim.  Type java Client and press enter.  You will see the following:

Enter the name of the MIDI server computer 0:

The piece of music has 5 parts, as follows:

·        Part 0           Pan pipe 1

·        Part 1           Pan pipe 2

·        Part 2           Glock

·        Part 3           Harps

·        Part 4           Bass

At the above prompt, enter the name (or IP address) of a machine running the MIDI server.  If the client successfully connects to that machine, you will see:

Enter the name of the MIDI server computer 1:

If this has not happened, please refer to the RMI installation tips on page 71.  Otherwise, enter the name of another machine running the MIDI server.  You may use the same machine for all five MIDI servers, five different machines, or a combination of both.  (The client is designed to connect to any number of machines from one to five - providing all machines are successfully running the MIDI server.)  Repeat the same process for machines 2, 3 and 4.

Providing the client successfully connects to all 5 machines, you should hear a short piece of music, “Minim” written by Rick Cocker [Cocker, 1989].

Explanation

The example is composed of the following files:

·        Client.class, Client.java  The client program

·        MIDIServer.class, MIDIServer.java  The MIDI server program

·        MIDIServerInterface.class, MIDIServerInterface.java  The interface which the MIDI server implements, and which the client program makes use of

·        MIDIServer_Skel.class, MIDIServer_Stub.class Automatically generated stub and skeleton files.

MIDIServerInterface

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface MIDIServerInterface extends Remote
{
      void allNotesOff() throws RemoteException;
      void allSoundOff() throws RemoteException;
      void controlChange(int controller, int value) throws RemoteException;
      int getChannelPressure() throws RemoteException;
      int getController(int controller) throws RemoteException;
      boolean getMono() throws RemoteException;
      boolean getMute() throws RemoteException;
      int getPitchBend() throws RemoteException;
      int getPolyPressure(int noteNumber) throws RemoteException;
      int getProgram() throws RemoteException;
      boolean getSolo() throws RemoteException;
      boolean localControl(boolean on) throws RemoteException;
      void noteOff(int noteNumber) throws RemoteException;
      void noteOff(int noteNumber, int velocity) throws RemoteException;
      void noteOn(int noteNumber, int velocity) throws RemoteException;
      void programChange(int program) throws RemoteException;
      void programChange(int bank, int program) throws RemoteException;
      void resetAllControllers() throws RemoteException;
      void setChannelPressure(int pressure) throws RemoteException;
      void setMono(boolean on) throws RemoteException;
      void setMute(boolean mute) throws RemoteException;
      void setPitchBend(int bend) throws RemoteException;
      void setPolyPressure(int noteNumber, int pressure) throws RemoteException;
      void setSolo(boolean soloState) throws RemoteException;
}

MIDIServerInterface defines the type of object that a client uses and which the MIDIServer class must implement.  At run-time, the client does not have access to the remote MIDIServer class, and, therefore, cannot create an instance of it.  Instead, we give the client access to a MIDIServerInterface, which it can use in lieu of the real MIDIServer class.  When writing RMI-based applications, these interfaces are always created before the server or client.

The MIDIServerInterface is intended to look exactly the same as the MidiChannel interface defined in the JavaSound API. (See “Interface MidiChannel” in the JavaSound API documentation for details.)

As stated in the limitations section, the jmidi package uses a very low-level view of a MIDI device.  A composer requires a more natural and descriptive means of accessing a MIDI device, which is why I have chosen to create such an interface to the MIDI server.

At this stage, I could have attempted to design my own MIDI device interface, but I chose to ‘borrow’ the MidiChannel interface proposed by the JavaSound API.  Presumably, the MidiChannel interface has been well thought out, and tried and tested by the Java team.  Another advantage is that building a MIDI server (implementing the MIDIServerInterface) which makes use of the JavaSound API would be very straightforward and would require no reengineering.

The above interface, therefore, is almost a direct copy of the MidiChannel interface, except each method may now throw a RemoteException.  This is to inform the client that communication between the client and the server has broken down.

Client

import java.rmi.*;
import java.io.*;

class Client
{
      private static int tempo= 4;

      public static void main(String[] args) throws IOException, NotBoundException, RemoteException, java.net.MalformedURLException, InterruptedException

As always, we have delegated any exceptions to the main method, which will quit with an error.  Remember that this keeps the example simple, but it is not fault-tolerant in any way.

      {
            // Connect to MIDI Servers
            MIDIServerInterface[] ms= new MIDIServerInterface[5];
            for(int i= 0; i<=4; i++)
            {
                  System.out.print("Enter the name of the MIDI server computer "+i+": ");
                  ms[i]= (MIDIServerInterface) Naming.lookup("//"+new BufferedReader(new InputStreamReader(System.in)).readLine()+"/MIDIServer"+i);
            }

The above section first sets up an array of five remote MIDI server references, ms[].  It then loops through the array, asks the user for the name of each remote machine, and connects to it.

Remember that a MIDI server actually exports 16 MIDI server objects representing the 16 channels of the corresponding MIDI device.  In the above section ms[i] refers to MIDI server number i on the host machine.  It is for this reason that all five MIDI server references may actually refer to the same machine.

            // Play Song
            for(int i= 1; i<=2; i++)
            {
                  ms[0].programChange(77);
                  ms[0].controlChange(10, 1);
                  ms[0].noteOn(70, 100);
                  ...
                  Thread.sleep(48*tempo);
                  ms[3].noteOn(84, 0);
            }
      }
}

The rest of the client program simply steps through a long list of method calls to the five remote MIDI servers and short pauses (Thread.sleep).  The whole list is played twice to make the composition a little longer.

The client score was created by first saving the music as a type 0 MIDI file.  The MIDI file was then passed through a MIDI parser which had been adapted to produce a list of remote method calls.  The MIDI parser program can be found in \Utils\MIDIParser\ on the included CD-ROM.  It uses the Rogus McJava package [Denckla, 1999] - copy the contents of the file \Software\rogus.zip to the directory \java\rogus\ in your Java installation.

MIDIServer

The explanation of the MIDI server does not cover the workings of a general RMI server, which can be found on page 73.  An explanation of how jmidi works can be found on page 85.

import jmidi.*;
import java.rmi.*;
import java.rmi.server.*;
import java.io.*;

public class MIDIServer extends UnicastRemoteObject implements MIDIServerInterface
{
      public static MidiPort mp= null;
      public int channelNumber;
      private static String hostName;

The MIDI server is unusual in that the single program exports 16 objects, one for each of the 16 channels of the corresponding MIDI device.  This is achieved by creating and binding 16 instances of the MIDIServer class.  Every instance shares the same MidiPort, which is therefore declared static.  However, each instance has its own channel number, to which it passes incoming instructions.
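This pattern - many channel objects funnelling into one shared, static port - can be sketched without any MIDI at all.  In the hypothetical sketch below, the StringBuilder merely stands in for the single static MidiPort, and all names are illustrative:

```java
class ChannelObject
{
      static StringBuilder sharedPort= new StringBuilder();   // stands in for the one static MidiPort
      private final int channelNumber;

      ChannelObject(int channelNumber) { this.channelNumber= channelNumber; }

      void noteOn(int note)
      {
            // Every instance writes to the same shared port, tagged with its channel
            sharedPort.append("[ch" + channelNumber + " on " + note + "]");
      }
}

public class SharedPortDemo
{
      public static void main(String[] args)
      {
            ChannelObject[] channels= new ChannelObject[16];
            for(int i= 0; i<16; i++) channels[i]= new ChannelObject(i);
            channels[0].noteOn(60);
            channels[9].noteOn(36);
            System.out.println(ChannelObject.sharedPort);   // both calls reached the one port
      }
}
```

In the real server, each of the 16 bound remote objects plays the role of a ChannelObject, and the static MidiPort plays the role of sharedPort.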

      public MIDIServer(int channelNumber) throws RemoteException
      {
            super();
            this.channelNumber= channelNumber;
      }

This is the constructor for a MIDI server.  The constructor must call its superclass’s constructor in order to initialise remote functionality.  It then assigns the object its own channel number.

      public static void main(String[] args) throws MidiPortException, java.net.UnknownHostException, IOException
      {
            hostName= java.net.InetAddress.getLocalHost().getHostName();

            // Initialise MIDI Synthesizer
            for(int i= 0; i<MidiPort.getNumDevices(MidiPort.MIDIPORT_OUTPUT); i++) System.out.println("Device " + i + ": " + MidiPort.getDeviceName(MidiPort.MIDIPORT_OUTPUT, i));
            System.out.print("\nEnter the number of the MIDI output port: ");
            int midiPortNumber= new Integer(new BufferedReader(new InputStreamReader(System.in)).readLine()).intValue();
            mp= new MidiPort(0, midiPortNumber);
            mp.open();
            System.out.println("MIDI synthesiser initialised\n");

The main method runs once, when the server program is started.  As in the jmidi example, the program begins by asking the user which MIDI output port they would like the server to use.

            // Create and bind 16 MIDIServers
            MIDIServerInterface[] ms= new MIDIServerInterface[16];
            for(int i= 0; i<=15; i++)
            {
                  System.out.println("Binding MIDIServer("+i+")...");
                  try {ms[i]= new MIDIServer(i);}  catch(Exception e) {quitWithError(e);}
                  try {Naming.rebind("//"+hostName+"/MIDIServer"+i, ms[i]);}  catch(Exception e) {quitWithError(e);}
            }
            System.out.println("\nMIDIServer Ready!");

The program then loops 16 times, each time instantiating a MIDIServer object with a unique channel number and binding the object with the local name server.

            // Loop, waiting for the user to exit the program
            System.out.println("Type 'quit' to exit the Server\n");
            while(new BufferedReader(new InputStreamReader(System.in)).readLine().equalsIgnoreCase("quit")==false) System.out.println("\nType 'quit' to exit the Server\n");

            quit();
            System.exit(0);
      }


      public static void quitWithError(Exception e)
      {
            System.out.println("!!! " + e.getClass().getName());
            System.out.println(e.getMessage()+"\n");
            quit();
            System.exit(-1);
      }


      public static void quit()
      {
            // Close MIDI Synthesizer
            try {mp.close();}  catch(Exception e) {}
            System.out.println("\nMIDI synthesiser closed");

            // Unbind 16 MIDIServers
            for(int i= 0; i<=15; i++)
            {
                  System.out.println("Unbinding MIDIServer("+i+")...");
                  try {Naming.unbind("//"+hostName+"/MIDIServer"+i);}
                  catch(Exception f) {}
            }
      }

The MIDI server is slightly more fault-tolerant than other example programs in this project.  If an error does occur, the program will (in general) execute this quit method.  The most important thing this quit method does is to close the MIDI synthesiser.  Remember from the jmidi example that if the program exits without closing the MIDI synthesiser, a system re-boot is often required to reclaim it.

The quit method also unbinds the 16 MIDI servers, since they will no longer be available.
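A more robust way of guaranteeing that the clean-up happens would be a JVM shutdown hook.  The sketch below is a minimal illustration of the idea, using Runtime.addShutdownHook (a facility added to the JDK after the version used in this project); the cleanupMessage method is a hypothetical stand-in for closing the MidiPort and unbinding the servers.

```java
public class ShutdownHookDemo {
    // Hypothetical stand-in for the real clean-up work (mp.close() etc.)
    static String cleanupMessage() {
        return "MIDI synthesiser closed";
    }

    public static void main(String[] args) {
        // The hook runs when the JVM exits normally or is interrupted,
        // so the synthesiser is released even if the server crashes.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                System.out.println(cleanupMessage());
            }
        });
        System.out.println("Server running");
    }
}
```

This would remove the need to call quit explicitly from every error path.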

      // MidiChannel Implementation
      // --------------------------
      public void noteOff(int noteNumber)
      {
            try {mp.writeShortMessage((byte) ((0x80)+channelNumber), (byte) noteNumber, (byte) 64);}
            catch (MidiPortException e) {quitWithError(e);}
      }

      public void noteOff(int noteNumber, int velocity)
      {
            try {mp.writeShortMessage((byte) ((0x80)+channelNumber), (byte) noteNumber, (byte) velocity);}
            catch (MidiPortException e) {quitWithError(e);}
      }

      public void noteOn(int noteNumber, int velocity)
      {
            try {mp.writeShortMessage((byte) ((0x90)+channelNumber), (byte) noteNumber, (byte) velocity);}
            catch (MidiPortException e) {quitWithError(e);}
      }

      public void programChange(int program)
      {
            try {mp.writeShortMessage((byte) ((0xC0)+channelNumber), (byte) program);}
            catch (MidiPortException e) {quitWithError(e);}
      }


      public void programChange(int bank, int program)
      {
            try {mp.writeShortMessage((byte) ((0xB0)+channelNumber), (byte) 0, (byte) bank);}
            catch (MidiPortException e) {quitWithError(e);}

            try {mp.writeShortMessage((byte) ((0xC0)+channelNumber), (byte) program);}
            catch (MidiPortException e) {quitWithError(e);}
      }


      public void controlChange(int controller, int value)
      {
            try {mp.writeShortMessage((byte) ((0xB0)+channelNumber), (byte) controller, (byte) value);}
            catch (MidiPortException e) {quitWithError(e);}
      }

The above section is the core code of the MIDI server.  It implements the functions defined in the MIDIServerInterface, using the jmidi package.  This is simply a case of sending MIDI messages with appropriate parameters to the native MIDI device using the writeShortMessage method of the MidiPort, which all MIDI server instances share.  Note that the status byte of each message always has channelNumber added to it.  channelNumber is a member variable specific to each MIDI server instance.
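The arithmetic works because a MIDI status byte carries the message type in its high four bits and the channel number (0-15) in its low four bits, which is why adding channelNumber to 0x80, 0x90, 0xB0 or 0xC0 selects the channel.  The small sketch below (the class name is mine, not part of the example) demonstrates this:

```java
public class StatusByteDemo {
    // Build a status byte the same way the MIDI server does:
    // message type in the high four bits, channel in the low four.
    static byte statusByte(int messageType, int channelNumber) {
        return (byte) (messageType + channelNumber);
    }

    public static void main(String[] args) {
        // Note-on (0x90) for the server instance bound to channel 3
        System.out.println(Integer.toHexString(statusByte(0x90, 3) & 0xFF));   // prints "93"
        // Note-off (0x80) for channel 15, the last of the 16 servers
        System.out.println(Integer.toHexString(statusByte(0x80, 15) & 0xFF));  // prints "8f"
    }
}
```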

      // 'Dummy' Implementations of Remaining Methods
      public void allNotesOff() {}
      public void allSoundOff() {}
      public int getChannelPressure() { return 0; }
      public int getController(int controller) { return 0; }
      public boolean getMono() { return false; }
      public boolean getMute() { return false; }
      public int getPitchBend() { return 0; }
      public int getPolyPressure(int noteNumber) { return 0; }
      public int getProgram() { return 0; }
      public boolean getSolo() { return false; }
      public boolean localControl(boolean on) { return false; }
      public void resetAllControllers() {}
      public void setChannelPressure(int pressure) {}
      public void setMono(boolean on) {}
      public void setMute(boolean mute) {}
      public void setPitchBend(int bend) {}
      public void setPolyPressure(int noteNumber, int pressure) {}
      public void setSolo(boolean soloState) {}
}

Although I have chosen to implement the MidiChannel interface defined in the JavaSound API for the reasons outlined earlier, I have not implemented every single method.  The methods above are not essential for this example, and are given ‘dummy’ or empty implementations.

Appraisal

The above example successfully controls a number of distributed musical output devices.  The multimedia output resources are modelled as software objects hosted by various machines.  This model works well and is more intuitive than lower-level alternatives such as raw sockets.

Furthermore, the client program is not overly difficult to understand.  In the above example I chose to use an array of five MIDI servers, but I could just as easily have used five well-named variables (e.g. harps, bass, pan-pipes etc.).  The remote methods are named well enough to be readable by the human eye (e.g. noteOn, programChange), and even these names are not necessarily fixed.

However, as in the Three Blind Mice example, the system only performs in soft real time.  During playback the listener may have detected slight (or severe) timing errors.  There are two reasons for this:

1) the latency inherent in calling remote methods - i.e. the time taken between a client initiating an instruction and the multimedia object realising it.

2) the fact that the client program uses the Thread.sleep method as a means of pausing.  As mentioned earlier, the Thread.sleep method only guarantees that the client program will pause for a minimum amount of time.  Depending upon the CPU usage of the system, the client program may wake up later than it should have done.

The second problem may be resolved by using a much stricter pause routine.  The first problem, however, may only be resolved by rethinking the entire architecture.
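One possible form such a stricter pause routine could take is sketched below, under the assumption that trading CPU time for accuracy is acceptable: sleep in deliberately undershooting slices, then busy-wait on the system clock for the final stretch.  This is my own illustration, not code from the project, and it still offers no hard real-time guarantee.

```java
public class StrictPause {
    // Pause for approximately 'millis' milliseconds: coarse sleeps that
    // always undershoot the deadline, then a short busy-wait on the clock.
    public static void pause(long millis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + millis;
        long remaining;
        while ((remaining = deadline - System.currentTimeMillis()) > 2) {
            Thread.sleep(remaining / 2);  // sleep at most half the remaining time
        }
        while (System.currentTimeMillis() < deadline) {
            // busy-wait for the last couple of milliseconds
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        pause(50);
        System.out.println("paused for " + (System.currentTimeMillis() - start) + " ms");
    }
}
```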

Furthermore, although the client program is simple in itself, it is not simple to create.  The client in this case was created by composing a type 0 MIDI file and then parsing it, as explained on page 93.

Java IDL

What is Java IDL?

Java IDL (Interface Definition Language) is an alternative to RMI, distinguished by the fact that distributed objects need not be written in Java.

Java IDL conforms to the specifications laid out by the Common Object Request Broker Architecture (CORBA).  CORBA is an industry standard distributed object model developed by the Object Management Group (OMG) - an industry consortium.  It should be pointed out that CORBA is not a piece of software - it is a set of specifications.

IDL is a part of these specifications and is a language used to define an object’s interface.  The Java language uses the interface keyword to define an interface, which is then implemented in Java code.  IDL likewise defines an interface, but one which may be implemented in any programming language that supports CORBA, such as Java, C or C++.

Every language which supports CORBA has its own IDL mapping.  When an IDL source file is compiled, source code is generated automatically.  This source code serves as the basis for the implementation of the objects in the corresponding language.  The mapping defines what source code is produced.  Java IDL is the IDL-to-Java mapping.


How does CORBA work?


The overall CORBA architecture is shown in figure 3.


Figure 3 - CORBA architecture

The first thing to notice is that the stub and skeleton are both present - they are used in exactly the same way as in Java RMI.

However, the stub and skeleton do not communicate directly with each other, as they do under RMI.  Instead they communicate via an Object Request Broker (ORB).

An ORB acts as a message bus between objects.  When the client calls a remote method, the client stub makes use of the ORB’s connection capabilities, which forwards the invocation to the ORB used by the server skeleton.  ORBs communicate with each other via the Internet Inter-ORB Protocol (IIOP) - a standard which is part of the CORBA specifications.

As well as communication, an ORB may provide some extra utilities, e.g. a naming service, object persistence or transaction processing.  The ORB provided with Java 1.2 provides the communications services mentioned above as well as a simple naming service.

It should be pointed out that using a stub and skeleton is not the only means of distributed communication supported by CORBA.  Alternatives include interfacing directly with the ORB, or using the Dynamic Invocation Interface, which is used when the client program is unaware of the remote object’s interface.

Of what use is it to us?

The CORBA specification allows us to build multimedia output objects using any programming language which has an IDL mapping.  Although the Java Native Interface allows us to use native code, it is restricted to C and C++.  Languages supported by CORBA include Java, C, C++, COBOL, Ada95 and Smalltalk.

Using CORBA allows us to build multimedia output objects on platforms for which no Java Virtual Machine exists.  Furthermore, as described on page 15, part of the CORBAtelecoms specification is concerned with streaming data in real time.  The previous RMI example was criticised for performing only in soft real time, so any improvement to the time it takes to send and receive a remote instruction is certainly worth considering.

How to run the example

First of all, the naming service must be started.  Open a command line window and type:

tnameserv -ORBInitialPort 1050

This program is successfully running when it produces output such as the following:

Initial Naming Context:
IOR:000000000000002849444c3a6f6d672e6f72672f436f734e616d696e672f4e616d696e67436f6e746578743a312e300000000001000000000000002c00010000000000056164616d0000045a00000018afabcafe00000002c8938f7f000000080000000000000000
TransientNameServer: setting port for initial object references to: 1050

To run the server, open a command line window, and navigate your way to the directory /Examples/IDL on the included CD-ROM.  Type:

java TestServer -ORBInitialPort 1050

You should see the following output:

Binding Server...
Server ready
Type 'quit' to exit the Server

If this is not the case, please refer to the IDL troubleshooting section of The Java Tutorial at http://java.sun.com/docs/books/tutorial/idl/hello/compiling.html  If the server has successfully bound itself with the naming service, move to the machine that will be the client.  Open a command line window, and navigate your way to the directory /Examples/IDL on the included CD-ROM.  Type:

java TestClient -ORBInitialPort 1050 -ORBInitialHost host

where host is the name of the machine running the naming service.  You should see the following output:

Connecting to server...
Connected

Text to send to TextServer:

Type a string into the client window and press enter.  The string should be re-printed in the remote server’s window.

You may also wish to run the \Utils\IDL\NameClientList program, provided on the included CD-ROM, which will list the remote objects running on any particular machine.  Move to the directory \Utils\IDL and type:

java NameClientList -ORBInitialPort 1050 -ORBInitialHost host

where host is the name of the machine running the naming service.

Explanation

This example consists of the following files:

·        Test.idl  The server object interface, written in IDL

·        TestClient.java  The client program

·        TestServant.java  The core server functionality, i.e. an implementation of the Test interface.

·        TestServer.java  The main server program, which creates and binds a new server.

The following files are created automatically, when compiling Test.idl:

·        Test.java  The server object interface converted to Java

·        TestHelper.java, TestHolder.java  Two utility classes created automatically which contain useful functions specific to Test.

·        _TestImplBase.java  The server skeleton

·        _TestStub.java  The server stub

Test.idl

interface Test
{
      void text(in string s);
};

This is the definition of our simple server, written in IDL (for more information on IDL, please see http://www.omg.org/corba/cichpter.html#idls&s).  The interface defines one method, text, with a single string parameter.  The keyword in denotes that the parameter is passed from the client to the server.

An IDL file is compiled and generates a Java version of the interface, as well as stub/skeleton and other utility files. Please see http://java.sun.com/docs/books/tutorial/idl/hello/ for more information on how to obtain and use the IDL to Java compiler.

Test.java

/*
 * File: ./TEST.JAVA
 * From: TEST.IDL
 * Date: Wed Aug 25 15:38:26 1999
 *   By: idltojava Java IDL 1.2 Aug 18 1998 16:25:34
 */

public interface Test
    extends org.omg.CORBA.Object, org.omg.CORBA.portable.IDLEntity {
    void text(String x)
;
}

This is the first file the IDL to Java compiler creates.  It is the IDL interface expressed in Java syntax, and is implemented in the TestServant file.

TestServer

The file TestServer.java performs the ‘administrative’ work of setting up the remote object.

import org.omg.CosNaming.*;
import org.omg.CosNaming.NamingContextPackage.*;
import org.omg.CORBA.*;
import java.io.*;

The CosNaming packages are required to interact with the COS Naming Service, which is specified by the OMG and supplied with Java 1.2.  All applications which use CORBA also need the CORBA package.

public class TestServer
{
      public static void main(String args[]) throws org.omg.CORBA.ORBPackage.InvalidName, org.omg.CosNaming.NamingContextPackage.CannotProceed, org.omg.CosNaming.NamingContextPackage.InvalidName, org.omg.CosNaming.NamingContextPackage.NotFound, InterruptedException, IOException
      {

As always, most of the exceptions thrown throughout the server program are handed to the main function, which simply prints an error and exits.  Although this simplifies the example, the resulting system is not particularly fault tolerant.

            System.out.println("Binding Server...");
            ORB orb= ORB.init(args, null);

Setting up and binding the remote object is slightly more complex than when using conventional Java RMI.

We begin by creating an ORB, and initialising it with arguments passed from the command line.

            TestServant TestRef= new TestServant();
            orb.connect(TestRef);

We then create an instance of the test servant, and inform the ORB of its presence.

            org.omg.CORBA.Object objRef= orb.resolve_initial_references("NameService");

The next stage is to register the test servant with the naming service.  The first step is to establish a reference to the naming service: the above line obtains a CORBA object which is a reference to the name server.

            NamingContext ncRef= NamingContextHelper.narrow(objRef);

The resolve_initial_references method returns an object of type org.omg.CORBA.Object.  We use the narrow method of the NamingContextHelper to convert the CORBA object, objRef, into a NamingContext object, ncRef.

            NameComponent nc= new NameComponent("Test", "");
            NameComponent path[]= {nc};
            ncRef.rebind(path, TestRef);

            System.out.println("Server ready");

The final stage is to bind the servant object.  Using RMI, we supplied a string and a reference to the object in order to bind it.  Using COS naming, however, we must supply a NameComponent object.

A NameComponent holds a single element of a name.  An array of NameComponent can therefore hold a fully specified path to an object on any computer system.

            System.out.println("Type 'quit' to exit the Server\n");
            while(!new BufferedReader(new InputStreamReader(System.in)).readLine().equalsIgnoreCase("quit")) System.out.println("\nType 'quit' to exit the Server\n");

            ncRef.unbind(path);
            System.exit(0);
      }
}

The final section loops, waiting for the user to type ‘quit’.  When this happens, the program unbinds the server and manually exits the program.

TestServant

class TestServant extends _TestImplBase
{
      public void text(String s)
      {
            System.out.println(s);
      }
}

The test servant is very simple.  It extends _TestImplBase, which is created automatically by the IDL to Java compiler.  _TestImplBase implements the server skeleton, which means the only thing left for the test servant to do is to implement the methods defined in Test.idl.

This is done using ‘normal’ Java - all the code which performs complex mapping or manipulation of the ORB is held in the main server program.  The test servant simply implements the interface held in Test.java.

TestClient

import org.omg.CosNaming.*;
import org.omg.CORBA.*;
import java.io.*;

public class TestClient
{
      public static void main(String[] args) throws org.omg.CORBA.ORBPackage.InvalidName, org.omg.CosNaming.NamingContextPackage.NotFound, org.omg.CosNaming.NamingContextPackage.CannotProceed, org.omg.CosNaming.NamingContextPackage.InvalidName, IOException
      {
            System.out.println("Connecting to server...");
            ORB orb= ORB.init(args, null);  // Create a new orb
            org.omg.CORBA.Object objRef= orb.resolve_initial_references("NameService");  // Get a reference to the name service as a CORBA.object
            NamingContext ncRef= NamingContextHelper.narrow(objRef);  // Convert the CORBA.Object to a NamingContext
            NameComponent nc= new NameComponent("Test", "");                  // Create an array of NameComponents
            NameComponent path[]= {nc};

The client obtains a reference to the name server and composes a NameComponent in exactly the same way the server does. 

            Test t= TestHelper.narrow(ncRef.resolve(path));  // Get a reference to a Test object
            System.out.println("Connected\n");

The client then attempts to obtain a reference to the TestServer.  The resolve method of the naming service (hopefully) returns a reference to the remote TestServer object in the form of an org.omg.CORBA.Object.  The narrow method of the TestHelper object converts this into an object of type Test.  The TestHelper object is generated automatically when the IDL file is compiled.

            BufferedReader kbIn= new BufferedReader(new InputStreamReader(System.in));
            String str= "";
            while(str!=null)
            {
                  System.out.print("Text to send to TextServer: ");
                  str= kbIn.readLine();
                  if(str!=null) t.text(str);
            }
            System.out.println("\nClient program exiting");
      }
}

The rest of the client program simply obtains a string from the user and uses it as a parameter when calling the remote method, text.  Notice that in the above section, the remote object, t, is treated in exactly the same way as a local object.

Appraisal

As with RMI, Java IDL provides a natural method of communicating with remote computers.  It is suitable for building a list of multimedia instructions, as each entry in the list need only take up one line and will have a descriptive name.

However, when compared to RMI, Java IDL often takes longer to make initial connections.  Furthermore, Java IDL is much more complicated than RMI and requires a working knowledge of the CORBA specifications.  The IDL to Java mappings also add an extra layer of complexity.

Further Information

For further information on how to program using Java IDL, please refer to:

·        The Java Tutorial (\Docs\tutorial\idl\index.html on the included CD-ROM)

·        The Java API Documentation (\Docs\jdk\guide\idl\index.html on the included CD-ROM)

Conclusion

Project Appraisal

The objectives of this project (page 26) can be summarised as follows:

·        Investigate candidate technologies which permit:

1.      Encapsulation of multimedia output objects;

2.      Distribution of objects;

3.      Use of a ‘score’ which controls the output objects.

·        Construction of a demonstrator system.

·        Identification of the ‘best’ candidate technology.

 

Although the context section identified several candidate technologies, this project has so far only investigated one in depth: Java.  The other principal candidate technology, CORBA, was only touched upon.  Furthermore, at this stage in the project the ‘best’ candidate technology cannot reliably be proposed.

However, with respect to distributed multimedia sequencing, this project has investigated the capabilities of Java, which are summarised here.  For detailed information, please refer to the ‘appraisal’ in the corresponding section of the ‘implementation’.

·          Graphics 2D     Shipped as standard with Java 1.2 and has a very rich set of graphics functions.  At present, the only available means of obtaining a full-screen graphics window is a performance bottleneck, which makes complex animations difficult.

·          JavaSound        Easy-to-use, platform-independent MIDI synthesiser and digital audio functions.  Software wavetable synthesis is of limited sound quality and costly in CPU terms.

·          Sockets             Quick and straightforward means of remote communication.  Unnatural and inconvenient to use with respect to distributed multimedia sequencing.

·          RMI                  Natural, object-orientated means of remote communication, which is good for distributed multimedia sequencing.  Often difficult to get up and running, perhaps slightly slower than plain sockets.

·          JNI                    Unleashes the power of native code, whilst still maintaining the other benefits of Java.  Only supports C and C++, with ‘messy’ mappings.

·          jmidi                Simple access to native MIDI devices.  Works on low-level MIDI messages, which may be unnatural and overly technical to use.

·          Java IDL          Natural, object-orientated means of remote communication, which is good for distributed multimedia sequencing.  Is CORBA-compliant, which makes objects open for use by other systems.  Not straightforward to use, and the initial connection overhead is high.

This project has, however, succeeded in achieving multimedia object encapsulation, distribution and scoring:

Encapsulation of Multimedia Output Objects

The first encapsulation of a multimedia output object was a MidiChannel, introduced in the JavaSound section.  A MidiChannel is a distinct, autonomous object and has a very well-defined interface to the outside world.  It encapsulates and abstracts a single channel of a MIDI synthesiser and provides a complete set of methods for control and query.

The text server introduced in the Three Blind Mice example (page 57) was also an encapsulation of a multimedia output object.  It, too, was a distinct, autonomous object, but was based on sockets and, as such, not quite as natural to use as the MidiChannel.

An object as defined by Object Orientated techniques and implemented by Java is, in itself, an encapsulation of a system.  This is why Java and OO techniques in general are particularly good at encapsulating multimedia output systems.

Distribution of Objects

Distribution of objects was first achieved using sockets, as in the Three Blind Mice example.  Sockets are a ‘no-nonsense’ means of communication and are considered to be the ‘baseline’ in terms of inter-computer communication speed.  They are, however, only able to process streams of bytes.  Any other type of data must first be ‘serialised’ into a stream of bytes.  Sockets are also built upon ports, which must always be established in advance.
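To illustrate what that serialisation involves, the sketch below flattens a hypothetical noteOn instruction into bytes, using a wire format invented for this example (a one-byte opcode followed by two ints) - the examples in this project did not use this exact format.

```java
import java.io.*;

public class InstructionBytes {
    // Hypothetical wire format: a one-byte opcode followed by two int
    // parameters.  Every instruction must be flattened like this before
    // it can travel down a socket's byte stream.
    static final byte OP_NOTE_ON = 1;

    static byte[] encodeNoteOn(int noteNumber, int velocity) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeByte(OP_NOTE_ON);
        out.writeInt(noteNumber);
        out.writeInt(velocity);
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] message = encodeNoteOn(60, 100);
        // 1 opcode byte + two 4-byte ints = 9 bytes
        System.out.println(message.length + " bytes");
    }
}
```

The receiving end would have to reverse the process with a DataInputStream, dispatching on the opcode - exactly the kind of conversion work that RMI performs automatically.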

The Minim example achieved distribution by using the Remote Method Invocation mechanism provided by Java.  RMI is built upon sockets, but each message passes through the client stub, through a socket and then through the server skeleton before it is received.  For this reason, communication via RMI can never be as quick as via a plain socket.

RMI, however, is much more natural to use in the case of distributed multimedia sequencing.  In order for a client to gain access to the functionality provided by e.g. a MIDI server (see page 95) via a socket, the client would either need to represent all function calls as a series of bytes, or to perform some sort of conversion.  RMI performs this conversion automatically, and provides the client with what appears to be a local MIDI server object.  This ‘naturalness’ is RMI’s forte.

Object distribution may also be provided by Java IDL via an ORB.  Unfortunately this project has not managed to compare the speeds of IDL and RMI.  IDL, however, possesses all the advantages of RMI, along with the fact that remote objects may be written in any programming language which has an IDL mapping.

Scoring

The Three Blind Mice and Minim examples both use a client program which controls the multimedia output objects.  This client program could be called a score, an event list, a script or a sequencer.

At present the score is encoded as a list of remote method calls and short pauses.  As the score is a Java program, it must be compiled before use.  This scoring mechanism is conceptually very simple, but the process of actually producing a score can be very laborious indeed.

Furthermore, the client score uses Thread.sleep(n milliseconds) to provide the short pauses between events.  This mechanism is flawed because it only guarantees that the script will pause for a minimum of n milliseconds.  Depending upon the usage of the CPU, the client score may wake up after n+1, n+100 or n+1000 milliseconds.  It is for this reason that the Minim example sounds a little odd - as if the music is struggling to be played.

Demonstrator Systems

The project produced a number of example programs, each demonstrating a feature of Java.  Furthermore, these features were put together to produce two small scale demonstrator systems - Three Blind Mice (page 57) and Minim (page 88).  However, as explained on page 28 and in the next section, a larger scale demonstrator is yet to be built.

The Practical Stage

The final stage is to use the knowledge gained during the project to implement a larger demonstrator system, which will be a distributed realisation of a multimedia composition.  As stated in the objectives (page 28) this demonstrator system will be presented at the University of York Music Technology concert on 22nd September 1999, almost a month after this report is sent for binding.

This final, practical, stage will also be documented and will form an addendum to this report.  The final stage will continue the work done in this project, and will focus on:

·        Using C to build CORBA-compliant remote objects

·        Building a graphical output object

·        Building a lights controller object

·        Finding a more reliable method of creating pauses between events in the client score

·        Building a distributed realisation of a multimedia composition

Further Work

The previous work performed in the field of distributed multimedia control was outlined in the ‘context’ section (page 1).  With respect to giving a composer control of a network of multimedia objects, these projects failed in a number of areas, and left a ‘gap’ which this project sought to investigate (see page 23). 

This gap, however, was too large for this project to cover in full, and I therefore decided to focus on a specific set of tasks, as outlined on page 26.  There are still, therefore, a number of areas which require further work.  These outstanding issues form the basis for follow-up work, and are outlined below.

Benchmarking

This project failed to carry out any empirical tests on the speed of the various distributed communication techniques.  Measuring the time it takes to establish an initial connection, send the first message and send subsequent messages would provide a sound basis for comparing the communication techniques.

These tests should be carried out using sockets, RMI and ORB-based communication, running under a range of operating systems and computer architectures.

User Friendly Scoring

The scoring technique used throughout this project was to write a client program which controlled a number of remote output objects.  In the case of the Three Blind Mice example, this program iterated through an array of data elements.  In the case of the Minim example, the client program ran through a very long list of remote method calls.

Neither of these methods is in any way user-friendly, and both treat the composer as a programmer.  An appropriate graphical user interface, such as those seen in popular MIDI sequencing programs (such as Cakewalk or Cubase), would greatly increase the usability of the system.

Input Objects

As stated on page 26, this project was based on pre-written scores controlling multimedia output objects.  Considering the use of input objects would greatly increase the potential of the system, and would allow composers to work with dynamic performances rather than static compositions.

An input means would be encapsulated as an object, which would communicate with one or more output objects.  The input object could be used in parallel with a score, or could replace a score entirely.

Multiple Users

At present, a single client score is executed and controls a number of multimedia output objects.  There is no reason why two client scores could not be executed simultaneously, providing they keep in time with each other (if this is necessary).

This concept becomes more interesting when the client programs are somehow under the real-time control of the composer/performer (“user”) or if input objects are used.  In this case the user may adopt a particular role in the performance, such as a ‘percussionist’ or a ‘pianist’.  However, each user will have access to all the multimedia output objects present on the network.  This would imply some sort of locking mechanism to ensure that two users are not able to assume control of the same output object simultaneously.
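A minimal sketch of such a locking mechanism is given below, assuming output objects are identified by name; the class and method names are hypothetical, not part of any example in this project.

```java
import java.util.HashMap;
import java.util.Map;

public class OutputObjectLocks {
    // Maps an output object's name to the user currently controlling it.
    private final Map<String, String> owners = new HashMap<String, String>();

    // Returns true if the user gained control; false if another user holds it.
    public synchronized boolean acquire(String objectName, String user) {
        if (owners.containsKey(objectName)) return false;
        owners.put(objectName, user);
        return true;
    }

    // Releases the object, but only if this user actually holds it.
    public synchronized void release(String objectName, String user) {
        if (user.equals(owners.get(objectName))) owners.remove(objectName);
    }

    public static void main(String[] args) {
        OutputObjectLocks locks = new OutputObjectLocks();
        System.out.println(locks.acquire("MIDIServer3", "pianist"));       // true
        System.out.println(locks.acquire("MIDIServer3", "percussionist")); // false - already held
        locks.release("MIDIServer3", "pianist");
        System.out.println(locks.acquire("MIDIServer3", "percussionist")); // true
    }
}
```

In the distributed system the lock table itself would need to be a remote object, so that all users contend for the same locks.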

Portability

Imagine that a composer has created a multimedia composition using the multimedia output objects presented by a network of computers he/she has in his/her studio.

Now imagine that the composer puts the score onto a floppy disc and takes it to a friend’s studio.  This friend also has a network of computers, but one which presents a completely different set of multimedia output objects.  The client program will fail to locate the desired server objects and will exit.

Integrating portability into the system would mean that when the composer attempts to play the client score at the friend’s studio, a set of output objects is chosen which can be substituted for the output objects chosen in the composer’s studio.

Methods of achieving portability include:

·        Using inheritance/polymorphism

·        Establishing a reference to a remote object by specifying an interface rather than a name

·        Establishing a reference to a remote object manually, which is automatically remembered the next time the system is faced with such a decision.

JINI

JINI, outlined on page 13, would enable a multimedia device or object, when connected to a network, to declare its presence and make itself available for other objects to use.

This would greatly simplify the process of starting and configuring a network, and would allow multimedia objects to be added or removed on-the-fly.

Object Pipelines

As mentioned in the candidate technology section on page 7, many electronic music systems make use of a ‘pipeline’ model, where a signal is generated by one object and modified by one or more further objects before arriving at some audio output.

Distributed object technology has opened up the possibility of applying this idea to the system developed throughout this project.

Using the system as it stands, a client score sends an instruction to a multimedia output object, which realises it.  There is no reason why the client should not send the instruction to an intermediate object which somehow transforms the instruction before forwarding it to a multimedia output object (or perhaps another transforming object).

Enabling this kind of functionality would require a protocol whereby a client can specify the path taken by a message.
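A minimal local sketch of the pipeline idea follows.  The names are mine; in the real system the Output interface would be a remote interface, and each object in the chain could live on a different machine.

```java
public class PipelineDemo {
    // A multimedia output interface, reduced to one method for the sketch.
    public interface Output {
        void noteOn(int noteNumber, int velocity);
    }

    // A terminal output object: here it simply prints the instruction.
    public static class Printer implements Output {
        public void noteOn(int n, int v) {
            System.out.println("noteOn " + n + " " + v);
        }
    }

    // A transforming object: transposes each note before forwarding it
    // to the next object in the pipeline.
    public static class Transposer implements Output {
        private final Output next;
        private final int interval;

        public Transposer(Output next, int interval) {
            this.next = next;
            this.interval = interval;
        }

        public void noteOn(int n, int v) {
            next.noteOn(n + interval, v);
        }
    }

    public static void main(String[] args) {
        // score -> transposer -> output object
        Output chain = new Transposer(new Printer(), 12);
        chain.noteOn(60, 100);  // prints "noteOn 72 100"
    }
}
```

Because each transforming object implements the same interface as a terminal output object, pipelines of any length can be assembled without the client score knowing how many stages a message passes through.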

References

[Cocker, 1989]                        Rick Cocker (1989).  “Minim”

 

[Coulson, 1998]                      Geoff Coulson (1998).  "A Distributed Object Platform Infrastructure for Multimedia Applications", Distributed Multimedia Research Group, University of Lancaster

                                               ftp://ftp.comp.lancs.ac.uk/pub/mpg/MPG-98-04.ps.Z

 

[DCE, 1996]                           The Open Group (1996).  "DCE Overview"

                                               http://www.opengroup.org/dce/info/papers/tog-dce-pd-1296.htm

 

[Denckla, 1999]                      Ben Denckla and Patrick Pelletier (1999)  “Rogus McJava”

                                               http://theremin.media.mit.edu/rogus/

 

[DSTC, 1]                               Distributed Systems Technology (1999).  “DSTC RM-ODP Information Service”

                                               http://www.dstc.edu.au/AU/ODP/

 

[Foote, 1998]                          Bill Foote (1998).  "Proposed Java Real-Time Extension Specification, version 0.1"

                                               http://www.sdct.itl.nist.gov/~carnahan/real-time/sun/

 

[Hong, 1998]                           Prof. J. Won-Ki Hong (1998).  “The MAESTRO Project”

                                               http://dpe.postech.ac.kr/maestro

 

[Horstmann, 1997]                  Markus Horstmann and Mary Kirtland (1997).  "DCOM Architecture", Microsoft Corporation

                                               Available from http://msdn.microsoft.com/library/backgrnd/html/msdn_dcomarch.htm

 

[Inferno, 1]                              "Inferno User's Guide Section 1", Lucent Technologies

                                               http://www.lucent-inferno.com/Pages/Developers/Documentation/R2.3/intro23a.PDF

 

[Inferno, 2]                              "Inferno User's Guide Section 5", Lucent Technologies

                                               http://www.lucent-inferno.com/Pages/Developers/Documentation/R2.3/SysOver23a.PDF

 

[JavaVision, February 1999]    JavaVision periodical, published by IBC.

[JavaVision, March 1999]         JavaVision periodical, published by IBC.

 

[Jensen, 1996]                         Doug Jensen (1996).  "Real-Time Manifesto"

                                               http://www.realtime-os.com/rtmanifesto/rtmani_2.html

 

[Krause, 1999]                        Jason Krause and Rob Guth (1999).  "Sun Turns Java Into Jini", The Standard

                                               Available from http://thestandard.net/articles/display/0,1449,1097,00.html?related.1449

 

[Marsanyi, 1999]                     Robert Marsanyi (1999).  “JavaMidi version 4”

                                               http://www.softsynth.com/javamidi

 

[Nilsen, 1999]                         Kelvin Nilsen (1999).  "Consensus effort of various members of the Real-Time Java Working Group, revised 15-Jan-1999"

                                               http://www.newmonics.com/webroot/rtjwg.html

 

[ISO, 1]                                  ISO/ITU-T.  “Open Distributed Processing Reference Model”

                                               http://www.iso.ch:8000/RM-ODP/

 

[Oliver, 1998]                          Huw Oliver, Christopher Edwards, Frederic Dang Tran, Jean-Bernard Stefani and David Hutchison (1998)  "Supporting Real-Time Multimedia Applications with Distributed Object Controlled Networks", Distributed Multimedia Research Group, University of Lancaster

                                               http://ftp.comp.lancs.ac.uk/pub/mpg/MPG-98-12_ps.gz

 

[OMG, 1996]                          Object Management Group (1996).  "The Common Object Request Broker: Architecture and Specification revision 2.0", Object Management Group

                                               http://www.omg.org/corba/corbaiiop.html

 

[OMG, 1998]                          Object Management Group (1998).  "CORBAtelecoms: Telecommunications Domain Specifications version 1.0", Object Management Group

                                               http://www.omg.org/library/ctindx.html

 

[OMG, RTSIG]                       Object Management Group.  “Real-Time PSIG Home Page”

                                               http://www.omg.org/homepages/realtime

 

[Sun, 1998]                             Sun Microsystems (1998). "JSR-000001 Real-Time Extension Specification"

                                               http://developer.java.sun.com/developer/jcp/jsr_real_time.html

 

[Sun, JavaSound]                     Sun Microsystems.  “JavaSound API Home Page”

                                               http://java.sun.com/products/java-media/sound/index.html

 

[Sun, JMF]                              Sun Microsystems.  “Java Media Framework FAQ”

                                               http://java.sun.com/products/java-media/jmf/forDevelopers/jmffaq.html

 

[Vogel, 1998]                          Andreas Vogel and Keith Duddy, (1998).  "Java Programming with CORBA", John Wiley & Sons, Inc.

 

[Williams, 1998]                      Al Williams (1998).  "Jini: The Universal Network?", Web Techniques, Volume 3, Issue 3.

                                               Available from http://www.webtechniques.com/features/1999/03/williams/williams.shtml

 



[1] This isn’t quite true - the only reason a kettleDrum class is created is that it sounds different from a drum class.  The hit method of the kettleDrum class would therefore need to be overridden to produce a kettle drum sound instead of a generic drum sound upon execution.

[2] It would appear, however, that many aspects of CORBA have been designed with the C++ language in mind.

[3] (When the program is compiled, the anonymous class appears as the file Main$1.class - see /tutorial/java/javaOO/innerClassAdapter.html for more information on anonymous classes.)

[4] See the Java Tutorial - /tutorial/essential/exceptions/index.html - for more detail on checked exceptions.