The XML schema for my ensemble-file format is specified in
Ensembles address the reality that indications in scores apply not just to single notes but to groups of
instruments. Consider tempo and dynamics. Tempo indications apply to all players, at least in most scores. Dynamics,
including crescendi and diminuendi, can apply to all the winds, to just the brass, or to just the horns. Thus, alliances between
instruments are flexible. They vary greatly from score to score, but they can also vary from moment to moment within the same score.
To achieve this flexibility the ensemble object model defines a collection of voices and a separate set of instruments. Alliances between voices are defined by grouping voices into choirs, and choirs can be nested into larger choirs. When a voice comes into effect, it takes on the score indications for the choirs that contain it, plus those for the ensemble as a whole. If an instrument plays while a certain voice is in effect, the instrument takes on the score indications for that voice. However, the same instrument can participate in multiple voices. Thus while voice alliances are fixed within a given score, instrument alliances can change.
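The inheritance rule described above can be sketched in a few lines of Python. This is an illustrative model only (the class and field names are mine, not the schema's): each choir may nest inside a larger choir, and a voice takes on the indications of every enclosing choir plus those of the ensemble as a whole.

```python
# Hypothetical sketch of indication inheritance; names are illustrative,
# not taken from the actual ensemble schema.

class Choir:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # enclosing choir, or None at the top
        self.indications = {}         # e.g. {"Velocity": some_contour}

    def effective_indications(self):
        # A voice in this choir takes on indications for every choir that
        # contains it, innermost first, plus those of the ensemble itself.
        merged = {}
        node = self
        while node is not None:
            for key, value in node.indications.items():
                merged.setdefault(key, value)   # inner choirs take precedence
            node = node.parent
        return merged

ensemble = Choir("Ensemble")
ensemble.indications["Tempo"] = "tempo-contour"       # tempo is global
brass = Choir("Brass", parent=ensemble)
brass.indications["Velocity"] = "brass-velocity-contour"
horns = Choir("Horns", parent=brass)

# A horn voice inherits the brass dynamics and the global tempo.
print(horns.effective_indications())
```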
Score indications are represented as contours, which track how the indication evolves over time. Contours were features of Leland Smith's SCORE program, developed during the 1970's as a preprocessor for MUSIC-N note lists; however contours are too ‘high-level’ for MIDI and for whatever reason were not incorporated into MusicXML. Each contour is implemented as a chain of segments, each defined by start time, starting value, end time, and ending value. Within the score framework described here, the tempo is defined globally for the ensemble. All other contours are user-defined within some choir. Dynamic contours are the most obvious that you will want to implement. However, you can also define custom contours; e.g. for pitch bend or timbre modulation.
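A contour of the kind just described, built as a chain of segments, can be evaluated with a short sketch. The segment fields (start time, starting value, end time, ending value) come from the text above; the interpolation helper and its names are my own illustration, using linear interpolation as in the LINEAR calculation mode discussed later.

```python
from dataclasses import dataclass

# Illustrative sketch, not the author's actual API: a contour is a chain of
# segments, each defined by start time, starting value, end time, ending value.

@dataclass
class Segment:
    t0: float   # start time
    v0: float   # starting value
    t1: float   # end time
    v1: float   # ending value

    def value_at(self, t):
        if self.t1 == self.t0:
            return self.v1
        frac = (t - self.t0) / (self.t1 - self.t0)
        return self.v0 + frac * (self.v1 - self.v0)   # linear interpolation

def contour_value(segments, t, default):
    # Return the value of the segment whose span contains t;
    # outside all segments, fall back to the contour's default value.
    for seg in segments:
        if seg.t0 <= t <= seg.t1:
            return seg.value_at(t)
    return default

# A four-beat crescendo from 60 to 100, evaluated at its midpoint.
crescendo = [Segment(0.0, 60.0, 4.0, 100.0)]
print(contour_value(crescendo, 2.0, 90))   # 80.0
```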
The XML excerpt provided as Listing 1 blocks out the structure of an ensemble file:
One contour element. This contour is named “Tempo”, with reserved id #0 and a globally unique index, also 0.
Three choir instances named “Melody”, “Harmony”, and “Bass”.
Five instrument declarations: #1 “ViolinArco”, #2 “ViolinPizz”, #3 “CelloArco”, #4 “CelloPizz”, and #5 “Piano”.
Many ensemble properties pertain specifically to external file formats, and these will be explained under later headings devoted to MUSIC-N, MIDI, and MusicXML property mappings. What remains are the default time signature, default key signature, and tempo contour:
Choirs are the grouping elements that define alliances between voices.
The relationship between choirs, voices, and contours was explained previously.
Listing 3 simply reprints the choir elements presented earlier in Listing 1. The voice elements are highlighted in blue.
A voice is a component of an ensemble that represents a line of music performed by one player. Voices roughly correspond to MIDI channels and to MusicXML staves, though the granularity of a ‘voice’ is one degree finer: more than one voice can play in one MIDI channel, and more than one voice can be plotted on a staff. If a score is destined for MusicXML export, then its voices are additionally required to be monorhythmic. The voice of a note provides the link to the score indications (contours) which pertain to the note. Each voice is scoped within a choir structure; however, the voice id has a global uniqueness requirement, so voices can also be displayed as a flat sequential list.
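The monorhythmic requirement on MusicXML-bound voices can be checked mechanically. A minimal sketch, under my assumption (not the author's stated definition) that overlapping notes are permitted only as chords sharing both onset and duration:

```python
# Check a voice for monorhythmicity. Assumption (mine): notes may overlap
# only if they share both onset and duration, i.e. chords are fine but
# staggered overlaps are not.

def is_monorhythmic(notes):
    # notes: list of (onset, duration) pairs
    spans = sorted(set(notes))          # distinct rhythmic spans, in time order
    for (s0, d0), (s1, d1) in zip(spans, spans[1:]):
        if s1 < s0 + d0:                # next span starts inside this one
            return False
    return True

print(is_monorhythmic([(0, 1), (0, 1), (1, 2)]))   # chord then note: True
print(is_monorhythmic([(0, 2), (1, 1)]))           # staggered overlap: False
```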
Listing 4 fleshes out the components of each voice. The one component that is unspecific to external output formatting is the sub-element that references a global instrument declaration; the referenced instrument must in turn list this voice as one it is capable of playing (see below).
The contour elements are highlighted in blue.
Like voices, custom contour elements are declared within choirs. Each contour has an index property, which is globally unique and which identifies the corresponding contour in the score data. However, two contours are allowed to share the same ID so long as they reside in independent choirs.
To understand why contour IDs may be shared this way, suppose that you have separate velocity contours for a brass choir and a woodwind choir. It's less confusing if you can specify contour #1 for a velocity indication regardless of whether the trumpet or the clarinet is currently playing.
All contour IDs above 0 are custom, but my own ensembles always use ID #1 for velocity control (dynamics).
Listing 5 provides the details used to configure “Velocity” contours for each of the choirs “Melody”, “Harmony”, and “Bass”. Each contour uses the LINEAR calculation mode and accepts values ranging from 0 (inclusive) to 128 (exclusive), with a default value of 90.
This happens to be the range of MIDI velocities, which is adapted to MusicXML by the mapping detailed in Table 3.
The minExpValue and maxExpValue properties in Listing 5 map the linear velocity range (0 to 128) to an exponential
range (500 to 10,000), which is suitable for MUSIC-N note lists.
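The linear-to-exponential mapping can be sketched as follows. The exact interpolation formula is my assumption, chosen so that the endpoints of the linear range land on the endpoints of the exponential range; the actual schema may normalize differently.

```python
# Hedged sketch of the mapping implied by the minExpValue/maxExpValue
# properties: equal steps in the linear domain become equal *ratios* in
# the exponential domain. The normalization is an assumption.

def to_exponential(v, lin_min=0.0, lin_max=128.0,
                   exp_min=500.0, exp_max=10000.0):
    frac = (v - lin_min) / (lin_max - lin_min)
    return exp_min * (exp_max / exp_min) ** frac

print(to_exponential(0))      # 500.0
print(to_exponential(128))    # 10000.0 (the excluded top of the range)
print(to_exponential(90))     # amplitude for the default velocity
```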
Alert readers may notice that the
midi blocks a little further down in each choir map the “Velocity” contours
to “Velocity” controls. However, there's no such thing as a Velocity control in MIDI! Rather, velocity and key number are two discrete
attributes of MIDI noteOn and noteOff events. For these particular events, the key number comes from the note's onset pitch, while the velocity
is evaluated as the contour value as of the note's starting time.
The “Melody” choir and its single like-named voice declare a second contour for pitch bend.
Values for the MIDI pitch-bend control range linearly over the 14-bit range from 0 to 16383, where the center value 8192 represents
no pitch deflection and the pitch-bend range is normally ±2 semitones. However, the “PitchBend” contour in
Listing 5 is configured so that pitch deflections can be indicated in cents,
or hundredths of a semitone. This is done by having the contour range from -200 (inclusive) to +200 (inclusive).
Later on, when the “PitchBend” contour is linked to the “PitchBend” MIDI control, the
factor values are configured so that cents deflections will map properly to MIDI control values.
So much for MIDI, but what about MUSIC-N? Here the
minExpValue and maxExpValue properties map cents values
(-200 to 200) to an exponential range from 0.890898 to 1.122462. Understand that the upper limit of this exponential range is
⁶√2 (the sixth root of 2), while the lower limit is the reciprocal of the upper limit.
Thus 0 cents maps to unity, and 100 cents maps to ¹²√2, the frequency ratio of an equal-tempered semitone.
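The arithmetic can be verified in a few lines. The cents-to-ratio formula is standard equal temperament; the ±2-semitone bend range and the clamping at the 14-bit limits are my assumptions about the MIDI side of the mapping.

```python
# Verify the cents mappings: frequency ratios for MUSIC-N, and 14-bit
# control values for MIDI (assuming the usual ±2-semitone bend range).

def cents_to_ratio(cents):
    # 1200 cents per octave, so a cents deflection is 2**(cents/1200)
    return 2.0 ** (cents / 1200.0)

def cents_to_midi_bend(cents, bend_range_cents=200.0):
    # Center 8192 = no deflection; clamp to the 14-bit range 0..16383.
    raw = round(8192 + cents * (8192.0 / bend_range_cents))
    return max(0, min(16383, raw))

print(round(cents_to_ratio(200), 6))    # 1.122462, the sixth root of 2
print(round(cents_to_ratio(-200), 6))   # 0.890899, its reciprocal
print(cents_to_midi_bend(0))            # 8192, i.e. no deflection
```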
The voice references are highlighted in blue.
Instruments are the ensemble components which indicate how a note in a score should be played. Instruments operate quasi-independently of choirs, so they are declared separately as a flat sequential list. Like the voice id, the instrument id has a global uniqueness requirement.
Listing 6 fills out the instrument declarations for the ensemble originally blocked out in Listing 1. Notice that although instruments are separate from voices, they are still closely associated with voices. Thus both the “ViolinArco” instrument and the “ViolinPizz” instrument are capable of playing notes within the “Melody” voice. The fact that “ViolinArco” is designated as the default “Melody” instrument allows users of the Ashton score-building engine to proceed directly from voice selection to note creation without explicitly selecting an instrument.
Likewise both the “CelloArco” instrument and the “CelloPizz” instrument are capable of playing notes within the “Bass” voice; here “CelloArco” is the default. The remaining “Piano” instrument is capable of playing notes within both voices of the “Harmony” choir: “HarmonyTreble” and “HarmonyBass”. Since “Piano” is the only capable instrument for these two voices, “Piano” also serves as default instrument for each voice.
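The capability-and-default relation just described can be paraphrased as a pair of tables. This is a sketch of the relationships in Listing 6, not the schema's actual representation:

```python
# Paraphrase of the instrument/voice relation described for Listing 6:
# which instruments can play which voices, and which is each voice's default.

capabilities = {
    "ViolinArco": ["Melody"],
    "ViolinPizz": ["Melody"],
    "CelloArco":  ["Bass"],
    "CelloPizz":  ["Bass"],
    "Piano":      ["HarmonyTreble", "HarmonyBass"],
}
defaults = {"Melody": "ViolinArco", "Bass": "CelloArco",
            "HarmonyTreble": "Piano", "HarmonyBass": "Piano"}

def instrument_for(voice, chosen=None):
    # A note may name an instrument explicitly; otherwise the voice's
    # default applies. Either way the capability list must agree.
    inst = chosen or defaults[voice]
    assert voice in capabilities[inst], f"{inst} cannot play {voice}"
    return inst

print(instrument_for("Melody"))                 # the default: ViolinArco
print(instrument_for("Melody", "ViolinPizz"))   # explicit override
```

This is why the user of the score-building engine can go straight from voice selection to note creation: the default lookup supplies the instrument.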
If the score is destined for conversion to MIDI, then the instrument will identify MIDI features such as the bank, program, volume, pan, and elevation. These MIDI properties are leveraged as well for MusicXML export.
If the score is destined for conversion to a MUSIC-N note list, then the ensemble instrument must be mapped to one or more note-list instruments. The latter instruments identify which signal-processing configuration(s) will create the sound.
My original implementation of the ensemble concept assumed that each note in the native score structure would generate one note statement in a MUSIC-N note list. This assumption allowed me to scope the declaration of note-list parameters so that a parameter affecting all instruments could be declared globally (at the ensemble level), while parameters affecting specific voices or instruments could be declared locally.
The mission has now broadened to include MIDI and MusicXML file formats, and for these new formats the assumption of one native note to one output note still holds true. However, experience using the Sound engine to synthesize speech sounds has demonstrated that sometimes it is useful for a single native note to expand into multiple note-list statements. This was particularly true for fricative consonants and for the noise-burst components of plosive consonants.
This new implementation no longer scatters parameter declarations around ensembles, choirs, and instruments.
Instead it uses a two-layered scheme, illustrated in Figure 1.
In this newer scheme, the ensemble instruments described previously
are shadowed by note-list instruments which are specific to the MUSIC-N output format.
These shadow instruments appear on the left side of Figure 1.
They are visible in the editor only when the MUSIC-N output format is enabled.
The note-list instrument declarations constitute the inner layer of the scheme.
They appear in Listing 7 as noteListInstrument elements, which are included under
noteList, itself directly under the ensemble. Each
noteListInstrument contains a list of parameter elements; each
parameter is in turn described by an ID and a descriptive name.
The descriptive names can be cross-checked with a Sound orchestra file.
The outer layer of my newer scheme consists of the instrument-link structure illustrated on the right side of Figure 1.
Each ensemble instrument contains zero (no MUSIC-N output), one, or more instrument links, sorted in the order in which note statements
will be created.
The links appear in Listing 7 as
instrumentLink elements, which can be located under each ensemble instrument. Each
instrumentLink contains a set of
parameterLink elements, one for each
parameter in the linked-from note-list instrument. The parameter links also describe how parameter values will be obtained.
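The two layers might resolve to note statements roughly as follows. Every name in this sketch is hypothetical, the p-field layout is the conventional MUSIC-N one (instrument, start, duration, then custom parameters), and the amplitude link simply reuses the exponential velocity mapping described earlier:

```python
# Hypothetical sketch of the instrument-link scheme: an instrument link
# walks the parameter links of its note-list instrument, resolving one
# value per parameter, to assemble a single MUSIC-N note statement.

def make_note_statement(instrument_id, start, duration, param_links, note):
    # param_links: one callable per note-list parameter; each resolves a
    # value from the native note (pitch, a contour lookup, a constant, ...)
    fields = [f"i{instrument_id}", f"{start:g}", f"{duration:g}"]
    fields += [f"{link(note):g}" for link in param_links]
    return " ".join(fields)

note = {"pitch_hz": 440.0, "velocity": 90}
links = [
    lambda n: n["pitch_hz"],                        # p4: frequency from pitch
    lambda n: 500 * 20 ** (n["velocity"] / 128),    # p5: amplitude from velocity
]
stmt = make_note_statement(1, 0.0, 0.5, links, note)
print(stmt)   # e.g. "i1 0 0.5 440 ..." with the computed amplitude
```

A single native note could expand into several such statements (as with fricatives) simply by giving its instrument more than one link.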
Ensembles contain other special-purpose components to help map score data to MUSIC-N note lists.
These other components include the opcode, the note-list instrument, and the statement terminator.
You don't define opcodes yourself; rather you select a note-list format which has a built-in set of opcodes along with a statement-termination option.
Listing 7 illustrates how MUSIC-N mappings may be configured for the ensemble originally
blocked out in Listing 1. Most MUSIC-N mapping information is enclosed in
elements which the listing highlights in blue; these can appear in three contexts.
The channel mode is not directly a property of a MIDI file but rather a way of writing the file. When the channel mode is Static, the correlation between voices and channels applies. The Dynamic channel mode is available when each note needs to be subject to independent pitch bend.
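The two channel modes can be sketched as follows. This is my interpretation of the two writing strategies, not the author's implementation: Static fixes one channel per voice, while Dynamic hands each sounding note its own channel so pitch bend can act on it independently.

```python
# Sketch (an interpretation, not the author's code) of the two
# channel-writing strategies for MIDI output.

def static_channels(voices):
    # Static mode: a fixed voice-to-channel correlation, skipping the
    # percussion channel 10 by General MIDI convention.
    channels = [c for c in range(1, 17) if c != 10]
    return {voice: channels[i] for i, voice in enumerate(voices)}

def dynamic_channel(free_channels):
    # Dynamic mode: allocate a channel at each note-on; the caller returns
    # it to the pool at note-off, so concurrent notes never share a channel
    # and each can carry its own pitch bend.
    return free_channels.pop(0)

mapping = static_channels(["Melody", "HarmonyTreble", "HarmonyBass", "Bass"])
print(mapping)   # each voice gets its own fixed channel
```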
© Charles Ames | Page created: 2013-10-16 | Last updated: 2015-06-30