My Eleven Demonstrations are a sequence of tutorial examples illustrating how computer programs can be coded to generate musical compositions. They were the practical component of a course on automated composition which I taught during the early 1980's at the State University of New York at Buffalo.
As it happened, the only accomplished performer enrolled that semester was James Perone, who at the time was pursuing graduate studies in clarinet performance and musical theory. Jim also played frequently with the S.E.M. Ensemble when that organization was located in Buffalo.
After my typescript was rejected for book publication, I sought other means of getting its content out there. I attempted to organize a road show around the Eleven Demonstrations jointly with Jim Perone. We got exactly two takers. The first taker was Otto Laske on behalf of the New England Computer Music Association in Boston. Our lecture/performance was given alongside presentations by Christopher Fry and by David Levitt. The second taker was Barry Truax on behalf of the 1987 ICMC in Vancouver. Truax gave us a featured spot as a conference tutorial. On that occasion we presented only Demonstrations 7-11: presentation time was limited; I wanted to focus on advanced techniques; and these later Demonstrations had the most musical depth. A writeup of the tutorial appeared in that year's ICMC Proceedings.
As I have explained elsewhere, the Eleven Demonstrations are contrived from just as much music-theoretical grounding as was needed to get things accomplished. These pieces treat rhythm in only the most elementary way. Meter exists in these examples only as a timing aid for the player. There are no strong or weak beats. Dynamics are held constant. Articulation is exclusively decorative, dealing solely with the spacing between notes or with whether notes are tongued or slurred. (This is a feature of these Demonstrations, not of my theoretical perspective generally.)
The purpose of these Demonstrations is to illustrate practically how various automated compositional techniques can be integrated into working programs. In preparing a hypertext edition of this Demonstrations text, I could have taken several approaches.
What was needed were explanations that teased out particular strands of code: strands which reveal the mechanics of each featured technique as it selects particular musical attributes. This means identifying the data elements used to store materials and associated control factors. It means pinpointing configuration tasks, which might happen at the top of a program or which might occur when section parameters change. It means tracing through the specific actions taken at each decision point: how some actions exemplify the design pattern and how some exceptions prove the rule. It might also mean examining follow-up actions which permit the outcome of one decision to influence future choices.
These requirements, though consistent with the original purpose of my Demonstrations, were generally not satisfied by the original typescript. In the end, I opted to conform the in-line text to these requirements. That meant extensive revisions; however, I have tried to limit the new explanations to facts that were known to me in 1984. Where techniques have been superseded by later improvements, I have isolated such comments within footnotes. These notes give primary emphasis to my most current publications; they reference back to the typescript only as a last resort. Thus readers should not need to dig into the original typescript except for scholarly purposes.
Over the years I have evolved a model of how composing programs are structured. This model owes much to others, especially to Gottfried Michael Koenig, though the jargon I use is mostly of my own coinage. I have recently formalized this model as a Java framework, though I had been using its terminology since the '80s. The model goes like this:
I introduced “stages of production” in the article describing the comparative search techniques used to compose Protocol for solo piano (see especially the stage-dependency graph on page 218). The distinguishing feature of a stage is that output from one stage can be captured to file and evaluated before this output becomes input for a future stage. Thus a stage is equivalent to a software pass. Early stages might include the activities which American serial composers characterize as precompositional and which Iannis Xenakis characterized as “outside time”. Middle stages might include the generation of material, e.g. chords or motives. Of the Eleven Demonstrations, numbers 1-6, 8, and 9 produce musical output in one pass. Demonstration 7 and Demonstration 10 each implement three stages. For Demonstration 7 the stages are: I. part writing, II. chordal rhythm, and III. arpeggiation. For Demonstration 10 the stages are: I. chord generation, II. chord progression, and III. arpeggiation. Demonstration 11 implements four stages: I. material, II. form, III. rhythm, and IV. rhythm.
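To make the stage idea concrete, here is a minimal sketch of my own (not code from the Demonstrations; the function names and file names are hypothetical): two passes communicate through a file, so the intermediate output can be inspected, or even hand-edited, before the next stage consumes it.

```python
# Hypothetical two-stage pipeline: each stage writes its output to a
# file, which can be evaluated before it becomes input to a later stage.

def stage_material(outfile):
    """Stage I: generate raw material -- here, five pitch numbers."""
    with open(outfile, "w") as out:
        for degree in range(1, 6):
            out.write("%d\n" % (60 + degree))

def stage_transform(infile, outfile):
    """Stage II: transform the captured material -- here, transpose
    everything up an octave (12 semitones)."""
    with open(infile) as src, open(outfile, "w") as out:
        for line in src:
            out.write("%d\n" % (int(line) + 12))

if __name__ == "__main__":
    stage_material("material.txt")                # pass 1: capture to file
    stage_transform("material.txt", "final.txt")  # pass 2: consume the file
```

Because each pass is a separate program run against a file, a composer can audit or veto the output of one stage before committing to the next.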
My characterization of a problem appears on the opening page (45) of the article describing the constrained search techniques used to compose Gradient, another work for solo piano. Here the composing program is said to address
… compositional problems that involve some collection of entities (e.g. sections, chords, notes) and that attempt to select an attribute (e.g. durations, registers, pitches) for each entity. In describing these techniques, it will be convenient to refer to individual acts of selection as decisions and to refer to a collection of decisions, one for each entity, as a solution to a problem.
My present framework identifies an act of selecting a specific option (supply element) for a decision as a choice. The current framework thus characterizes a solution as ‘a collection of choices, one for each decision.’
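As a hedged illustration of this vocabulary (the class and function names below are my own inventions, not identifiers from the Java framework): a problem pairs each entity with an attribute to be selected, a choice binds one supply element to one decision, and a solution holds one choice per decision.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass(frozen=True)
class Decision:
    entity: str      # e.g. a particular section, chord, or note
    attribute: str   # e.g. "duration", "register", "pitch"

@dataclass(frozen=True)
class Choice:
    decision: Decision
    option: Any      # the supply element selected for this decision

def solve(decisions: List[Decision],
          select: Callable[[Decision], Any]) -> List[Choice]:
    """A solution is a collection of choices, one for each decision."""
    return [Choice(d, select(d)) for d in decisions]
```

For instance, `solve` applied to two note-pitch decisions with a selector that always answers middle C yields a two-choice solution.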
Demonstrations 1-6, 8, and 9 have heterogeneous decisions. For the most part, an outer loop selects attributes for phrases. Control then passes to an inner loop which selects attributes for notes. The first stage of Demonstration 7 focuses upon pitch selection, but here something magical happens: the order in which decisions are taken itself becomes subject to decision-making of a higher order.
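The loop structure just described might be sketched like this (hypothetical Python of my own, not a transcription of any Demonstration): the outer loop fixes a phrase-level attribute, and the inner loop fixes note-level attributes under it.

```python
import random

def compose(num_phrases, notes_per_phrase, seed=0):
    """Outer loop: one register decision per phrase.
    Inner loop: one pitch decision per note within the phrase."""
    rng = random.Random(seed)
    piece = []
    for _ in range(num_phrases):
        register = rng.choice([48, 60, 72])      # phrase-level decision
        phrase = [register + rng.randrange(12)   # note-level decisions
                  for _ in range(notes_per_phrase)]
        piece.append(phrase)
    return piece
```

The phrase-level decision constrains every note-level decision beneath it, which is what makes the decisions heterogeneous rather than one flat sequence.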
Early dialects of FORTRAN, with their GOTO branching and labeling by numbers in columns 1-5, were notoriously difficult to read. However, the 1977 dialect of FORTRAN introduced block structures: DO-REPEAT for loops and IF-THEN-ELSE-ENDIF for conditional choices. These, together with sensible indentation, make for easy enough reading for programmers familiar with contemporary languages such as C++ and Java.
However, reading FORTRAN presents some complications:
FORTRAN variable names are restricted to lengths from 1 to 8 characters, always starting with a letter. Variable names have default types. Variable names starting with I, J, K, L, M, or N default to integer types; names starting with any other letter default to real types. Simple variables do not need declaring unless you want to override these defaults. FORTRAN also supports character and logical variable types, but these types require explicit declarations.
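These default-typing rules are simple enough to restate as a function; the sketch below is merely the FORTRAN rule expressed in Python for reference while reading the listings.

```python
def fortran_default_type(name):
    """Return the type FORTRAN assigns to an undeclared variable:
    INTEGER for names starting with I through N, REAL otherwise."""
    return "INTEGER" if name[0].upper() in "IJKLMN" else "REAL"
```

So a loop counter named NOTES defaults to INTEGER, while X1 defaults to REAL.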
FORTRAN represents conditional and logical operators using text abbreviations surrounded by periods:
FORTRAN     C, C++, Java        FORTRAN     C, C++, Java
.EQ.        ==                  .AND.       &&
.NE.        !=                  .OR.        ||
.LT.        <                   .NOT.       !
.LE.        <=
.GT.        >
.GE.        >=
FORTRAN counts like humans do. Loop indices range from 1 to N (not 0 to N-1). The expression FOO(1) dereferences the first element of FOO. The expression FOO(0) will cause a runtime fault, while if FOO has dimension N, the expression FOO(N) will dereference FOO's rightmost element.
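For readers translating the listings into a 0-based language, the mapping is a uniform shift of one; a small hedged sketch (the helper name is mine):

```python
def fortran_element(foo, i):
    """Emulate the FORTRAN expression FOO(I) against a 0-based Python
    list: valid indices run from 1 to N, and FOO(I) maps to foo[i - 1]."""
    if not 1 <= i <= len(foo):
        raise IndexError("FORTRAN index %d out of bounds 1..%d"
                         % (i, len(foo)))
    return foo[i - 1]
```

Thus FOO(1) is foo[0], FOO(N) is foo[N-1], and FOO(0) faults, just as described above.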
The only data structures made available by FORTRAN prior to 1977 were single-type arrays; FORTRAN '77 offered something corresponding to the C struct, but I did not employ this feature in my Demonstration coding.
FORTRAN always passes subroutine and function parameters by reference — never by value. My Demonstration coding made use of this feature, so watch out for it.
For example, suppose you have a subroutine INC(K) which contains just the statement K = K + 1. Then if you set J = 3 and call INC(J), the value of J after returning from INC(J) will be 4.
For a second example, suppose you have a subroutine FLIP(VALUES, N) where VALUES is an array of N elements. Suppose also you have an N×N array named FOO. Then, since FORTRAN stores arrays in column-major order, you can pass the second column of FOO to subroutine FLIP by calling FLIP(FOO(1, 2), N).
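Since Python (like Java) does not pass scalars by reference, a translation of such code must reach for a mutable container. Below is a hedged sketch of both examples; I take FLIP to reverse its array argument in place, which is an assumption about a name the text does not define.

```python
def inc(cell):
    """Counterpart of INC(K): FORTRAN updates the caller's variable,
    so this version mutates a one-element list standing in for K."""
    cell[0] += 1

def flip(values, n):
    """Counterpart of FLIP(VALUES, N), assumed here to reverse the
    first n elements in place, as a FORTRAN subroutine would mutate
    its array argument."""
    values[:n] = values[n - 1::-1]
```

After `j = [3]` and `inc(j)`, the caller's `j[0]` is 4, mirroring the FORTRAN behavior described above.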
© Charles Ames | Page created: 2017-03-12 | Last updated: 2017-03-12