Chapter 6
Conditional Selection I: Markov Chains

The typescript for chapter 6 is provided in AutomatedComposition06.pdf.

Synopsis

Chapter 6 describes Markov chains, which are successions of random events, where the outcome of each event is influenced by one or more of the event's immediate predecessors. The chapter begins by explaining the chaining mechanism along with the implications of chain order: if each event is influenced solely by its most immediate predecessor, then the chain is said to be 1st order; if the influence extends back through the two most recent predecessors, the chain is 2nd order, and so forth.

The chapter next focuses upon properties of 1st-order chains: waiting counts and stationary probabilities. Then follows a survey of Markov processes used in composing programs. These include Hiller & Isaacson's Illiac Suite, Experiment 4, Iannis Xenakis's Markovian stochastic music, Hiller & Baker's Computer Cantata and David Zicarelli's Jam Factory. Special mention is given to the INTERVAL feature of Gottfried Michael Koenig's Project Two. Curtis Roads's PROCESS/ING program also receives acknowledgement here, although since Roads came to his concept by way of connection networks, he would perhaps not be happy with my classification.

Chapter 6 was converted into a language-independent presentation and sent along to Leonardo, which published it in 1989 as “The Markov Process as a Compositional Model: A Survey and Tutorial”. Given that publication, the only content unique to the typescript is its FORTRAN implementation detail.

Pk-2  Pk-1 |  E4   F4   G4   A4   B4   C5   D5
E4    F4   |   -    -   3/7  1/7  1/7  1/7  1/7
E4    G4   |   -   1/1   -    -    -    -    -
E4    A4   |   -    -   1/1   -    -    -    -
E4    B4   |   -    -    -   1/1   -    -    -
E4    C5   |   -    -    -    -   1/1   -    -
E4    D5   |   -    -    -    -    -   1/1   -
F4    E4   |   -    -   3/7  1/7  1/7  1/7  1/7
F4    G4   |  3/9   -    -   3/9  1/9  1/9  1/9
F4    A4   |   -    -   1/1   -    -    -    -
F4    B4   |   -    -    -   1/1   -    -    -
F4    C5   |   -    -    -    -   1/1   -    -
F4    D5   |   -    -    -    -    -   1/1   -
G4    E4   |   -   1/1   -    -    -    -    -
G4    F4   |  3/9   -    -   3/9  1/9  1/9  1/9
G4    A4   |  1/9  3/9   -    -   3/9  1/9  1/9
G4    B4   |   -    -    -   1/1   -    -    -
G4    C5   |   -    -    -    -   1/1   -    -
G4    D5   |   -    -    -    -    -   1/1   -
A4    E4   |   -   1/1   -    -    -    -    -
A4    F4   |   -    -   1/1   -    -    -    -
A4    G4   |  1/9  3/9   -    -   3/9  1/9  1/9
A4    B4   |  1/9  1/9  3/9   -    -   3/9  1/9
A4    C5   |   -    -    -    -   1/1   -    -
A4    D5   |   -    -    -    -    -   1/1   -
B4    E4   |   -   1/1   -    -    -    -    -
B4    F4   |   -    -   1/1   -    -    -    -
B4    G4   |   -    -    -   1/1   -    -    -
B4    A4   |  1/9  1/9  3/9   -    -   3/9  1/9
B4    C5   |  1/9  1/9  1/9  3/9   -    -   3/9
B4    D5   |   -    -    -    -    -   1/1   -
C5    E4   |   -   1/1   -    -    -    -    -
C5    F4   |   -    -   1/1   -    -    -    -
C5    G4   |   -    -    -   1/1   -    -    -
C5    A4   |   -    -    -    -   1/1   -    -
C5    B4   |  1/9  1/9  1/9  3/9   -    -   3/9
C5    D5   |  1/7  1/7  1/7  1/7  3/7   -    -
D5    E4   |   -   1/1   -    -    -    -    -
D5    F4   |   -    -   1/1   -    -    -    -
D5    G4   |   -    -    -   1/1   -    -    -
D5    A4   |   -    -    -    -   1/1   -    -
D5    B4   |   -    -    -    -    -   1/1   -
D5    C5   |  1/7  1/7  1/7  1/7  3/7   -    -

Table 1: 2nd-order Markov transition matrix for pitches. Pk-2 and Pk-1
identify the source pair; the column headings give the candidate pitch Pk;
"-" marks a zero probability.

Commentary

The selection model adopted in this chapter resembles Project Two to the extent that decisions pertaining to a specific musical attribute all draw from the same supply of options. Project Two contains no selection feature implementing Markov chains, though the INTERVAL chord-selecting principle does betray Markovian influences.

Suppose you have a sequence of notes and you want to assign a pitch to each note. Suppose further that you want to draw these pitches from the set E4, F4, G4, A4, B4, C5, and D5 and that you want consecutive pitches to obey the following constraints: no pitch may repeat immediately, no pitch may return immediately after being left (Pk may never duplicate Pk-2), and any skip must resolve by stepwise motion in the opposite direction. One more thing: after a stepwise motion, you want two continuations to be preferred, either (1) to continue stepwise in the same direction or (2) to skip by a third in the opposite direction (escaping tone).

Notice in Table 1 that whenever a source pair of pitches proceeds by stepwise motion, five target pitches are offered; otherwise, the target pitch can only be the stepwise resolution. In the former circumstance, each of the two preferred motions receives three times the probability of any remaining option.
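Constraints like these can be turned into matrix rows mechanically. The following Python sketch is my own illustration, not from the typescript; the helper name `targets` is mine, and the 3-to-1 weighting is an assumption matching the 3/9-versus-1/9 pattern visible in Table 1:

```python
# Illustration: derive the weight row of a 2nd-order transition matrix
# like Table 1 from the stated constraints. The two favored motions get
# weight 3; every other permitted motion gets weight 1.
PITCHES = ["E4", "F4", "G4", "A4", "B4", "C5", "D5"]

def targets(p2, p1):
    """Weights for note k, given p2 = P(k-2) and p1 = P(k-1)."""
    i2, i1 = PITCHES.index(p2), PITCHES.index(p1)
    if abs(i1 - i2) != 1:                    # skip: only the stepwise resolution
        return {PITCHES[i1 + (1 if i2 > i1 else -1)]: 1}
    d = i1 - i2                              # +1 ascending step, -1 descending
    weights = {}
    for i, t in enumerate(PITCHES):
        if i in (i1, i2):                    # no repetition, no immediate return
            continue
        favored = i in (i1 + d, i1 - 2 * d)  # continue stepwise, or escape tone
        weights[t] = 3 if favored else 1
    return weights
```

Dividing each weight by its row total yields the fractions shown in Table 1; for instance, `targets("G4", "A4")` reproduces the urn contents described later in this commentary.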

Markov matrices impose constraints against particular sequences of supply elements by the simple expedient of setting the corresponding transition probability to zero. Constraint satisfaction was historically a core subject in artificial intelligence. Within a Markov matrix, however, a ‘source’ that admits no targets means that when a chain reaches such a source, the chain ends. Under no other circumstances will the constraints inherent within the matrix bring the chain to an impasse.

The constraints and preferences just listed have been itemized in Table 1, which is technically distinguished as a 2nd-order Markov transition matrix. Suppose this process had previously selected Pk-2=A4 for note k-2 and Pk-1=G4 for note k-1. Then for note k, the A4-G4 row of Table 1 offers E4 with probability 1/9, F4 with probability 3/9, B4 with probability 3/9, C5 with probability 1/9, and D5 with probability 1/9.

Notice that

1/9 + 3/9 + 3/9 + 1/9 + 1/9  =  9/9  =  1

The denominator in any given row is always chosen so that the probabilities sum to unity.
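The same normalization is easy to express in code. Here is a minimal illustration of mine using Python's fractions module, with the ball counts underlying the A4-G4 row:

```python
from fractions import Fraction

# Weights behind the A4-G4 row of Table 1: one E4, three F4, three B4,
# one C5, one D5. Dividing by the row total turns weights into
# probabilities that necessarily sum to unity.
weights = {"E4": 1, "F4": 3, "B4": 3, "C5": 1, "D5": 1}
total = sum(weights.values())                          # 9
probs = {p: Fraction(w, total) for p, w in weights.items()}

print(sum(probs.values()))                             # prints 1
```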

Now it happens that whenever you have a matrix like Table 1, where each state is able to communicate (directly or indirectly) with every other state, it is possible to calculate steady-state probabilities. For example, the steady-state probabilities for Table 1 are listed in Table 2.

Pitch        E4     F4     G4     A4     B4     C5     D5
Probability  0.085  0.155  0.174  0.173  0.174  0.155  0.085

Table 2: Steady-state probabilities for states
listed in Table 1.

This is the same distribution used to guide the weighted random selection scenarios described for Chapter 4 and Chapter 5, and indeed the present scenario provided the genesis for these values. The tapering off toward E4 and D5 reflects the fact that these pitches can be approached only from one side.
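Readers who want to reproduce Table 2 can approximate the steady-state values by power iteration over pitch-pair states. This Python sketch is my own; it rebuilds the Table 1 weights from the constraints described earlier (3-to-1 favoring of stepwise continuation and escape tone) rather than transcribing the table:

```python
# Approximate the steady-state pitch probabilities of Table 2 by iterating
# the distribution over pair states (P(k-1), P(k)) until it stabilizes.
PITCHES = ["E4", "F4", "G4", "A4", "B4", "C5", "D5"]
N = len(PITCHES)

def targets(i2, i1):
    """Table 1 weights by pitch index: skips resolve stepwise; steps favor
    continuing or the escape tone 3-to-1 over other permitted motions."""
    if abs(i1 - i2) != 1:
        return {i1 + (1 if i2 > i1 else -1): 1.0}
    d = i1 - i2
    return {i: (3.0 if i in (i1 + d, i1 - 2 * d) else 1.0)
            for i in range(N) if i not in (i1, i2)}

# Start uniform over all valid pair states (no repeated pitches).
dist = {(a, b): 1.0 / (N * (N - 1)) for a in range(N) for b in range(N) if a != b}

for _ in range(500):                       # power iteration
    new = dict.fromkeys(dist, 0.0)
    for (a, b), mass in dist.items():
        row = targets(a, b)
        z = sum(row.values())
        for c, w in row.items():
            new[(b, c)] += mass * w / z
    dist = new

steady = [0.0] * N                         # marginalize onto the current pitch
for (a, b), mass in dist.items():
    steady[b] += mass

for p, v in zip(PITCHES, steady):
    print(f"{p}: {v:.3f}")
```

Because every pair state communicates with every other and the chain is aperiodic, the iteration settles on a single distribution regardless of the uniform starting point.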

To generate a chain of pitches using Table 1, one would do the following:

  1. Initialize by choosing one of the pitch-pairs listed in the leftmost two columns of the table. For purposes of demonstration, let's choose two central pitches in the supply: G4 for note #1 and A4 for note #2.
  2. To select a pitch for note #3, populate an urn according to the probabilities listed in the G4-A4 row: one ball marked E4, three balls marked F4, three balls marked B4, one ball marked C5, and one ball marked D5. Mix the balls blindly, draw one out, and assign the pitch name on the ball to note #3. For purposes of demonstration, assume that the B4 ball was drawn.
  3. To select a pitch for note #4, populate the urn according to the probabilities listed in the A4-B4 row of Table 1. Mix the balls blindly, draw one out, and assign the pitch name on the ball to note #4. For purposes of demonstration, assume that the G4 ball was drawn.
  4. To select a pitch for note #5, consult the probabilities listed in the B4-G4 row of Table 1. The only probability populated in this row is the A4 choice. You could go through the motions of emptying out the urn, populating it with one ball marked A4, and drawing that ball back out again, but why bother? The choice for note #5 is determinate.
  5. For each remaining note, locate the row in Table 1 corresponding to the two previous pitches, then use the probabilities listed in the row to select a new pitch.
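The five steps above translate directly into a short program. This Python version is my own reconstruction (the typescript's implementation was in FORTRAN); `random.choices` plays the role of the urn, and the `targets` helper derives the Table 1 weights from the constraints stated earlier:

```python
import random

# Sketch of the urn procedure: generate a pitch sequence from the Table 1
# weights, forcing G4 and A4 as the first two choices.
PITCHES = ["E4", "F4", "G4", "A4", "B4", "C5", "D5"]

def targets(p2, p1):
    """Weights for the next pitch, given the two previous pitches."""
    i2, i1 = PITCHES.index(p2), PITCHES.index(p1)
    if abs(i1 - i2) != 1:                  # skip: only the stepwise resolution
        return {PITCHES[i1 + (1 if i2 > i1 else -1)]: 1}
    d = i1 - i2
    return {t: (3 if i in (i1 + d, i1 - 2 * d) else 1)
            for i, t in enumerate(PITCHES) if i not in (i1, i2)}

def generate(length=48, seed=None):
    rng = random.Random(seed)
    seq = ["G4", "A4"]                     # step 1: forced initial pair
    while len(seq) < length:               # steps 2-5: draw from the urn
        row = targets(seq[-2], seq[-1])
        pitch, = rng.choices(list(row), weights=list(row.values()))
        seq.append(pitch)
    return seq

print(" ".join(generate(48, seed=1)))
```

Tallying the output with collections.Counter yields summaries of the kind shown in Table 3; as noted below, 48 samples are far too few for the proportions to approach Table 2.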

I have coded a process which selects pitches according to the transition probabilities enumerated in Table 1 and which forces the initial choices of G4 for note #1 and A4 for note #2. Table 3 presents four sequences generated by this process using different random seeds. Each sequence has an accompanying statistical analysis. This analysis provides no insight other than establishing that the steady-state probabilities listed in Table 2 need many more than 48 samples to come into force.

Sequence 1 (position 0, length 48):
G4 A4 B4 G4 A4 F4 G4 E4 F4 D5 C5 B4 A4 G4 F4 A4 G4 B4 A4 F4 G4 E4 F4 C5 B4 A4 C5 B4 A4 G4 B4 A4 C5 B4 E4 F4 G4 A4 D5 C5 G4 A4 F4 G4 D5 C5 B4 A4
               E4      F4     G4     A4     B4     C5     D5
  Counts       3       7      10     11     8      6      3
  Proportions  0.0625  0.146  0.208  0.229  0.167  0.125  0.0625

Sequence 2 (position 0, length 48):
G4 A4 B4 D5 C5 B4 D5 C5 F4 G4 B4 A4 E4 F4 C5 B4 A4 G4 D5 C5 A4 B4 C5 D5 F4 G4 D5 C5 B4 A4 G4 C5 B4 D5 C5 E4 F4 D5 C5 A4 B4 F4 G4 E4 F4 A4 G4 F4
               E4      F4     G4     A4     B4     C5     D5
  Counts       3       7      7      7      8      9      7
  Proportions  0.0625  0.146  0.146  0.146  0.167  0.188  0.146

Sequence 3 (position 0, length 48):
G4 A4 B4 E4 F4 C5 B4 A4 C5 B4 D5 C5 F4 G4 D5 C5 B4 F4 G4 E4 F4 G4 E4 F4 A4 G4 D5 C5 B4 G4 A4 D5 C5 F4 G4 E4 F4 A4 G4 B4 A4 G4 B4 A4 G4 F4 C5 B4
               E4      F4     G4     A4     B4     C5     D5
  Counts       4       8      10     7      8      7      4
  Proportions  0.0833  0.167  0.208  0.146  0.167  0.146  0.0833

Sequence 4 (position 0, length 48):
G4 A4 B4 D5 C5 B4 E4 F4 B4 A4 F4 G4 B4 A4 C5 B4 D5 C5 G4 A4 F4 G4 C5 B4 D5 C5 A4 B4 G4 A4 D5 C5 A4 B4 D5 C5 G4 A4 B4 D5 C5 G4 A4 F4 G4 A4 B4 E4
               E4      F4     G4     A4     B4     C5     D5
  Counts       2       4      8      10     10     8      6
  Proportions  0.0417  0.0833 0.167  0.208  0.208  0.167  0.125
Table 3: Four pitch sequences generated using the transitions detailed in Table 1 with different random seeds.
The first two choices are forced to G4 and A4 in each sequence.

Demonstration

The practical programming content of Chapter 6 was provided by Demonstration 4: Markov Chains, which used Markov matrices to select average durations, articulations, registers, and chromatic degrees.

© Charles Ames Page created: 2017-03-12 Last updated: 2017-03-12