The typescript for chapter 6 is provided in AutomatedComposition06.pdf.
Chapter 6 describes Markov chains, which are successions of random events, where the outcome of each event is influenced by one or more of the event's immediate predecessors. The chapter begins by explaining the chaining mechanism along with the implications of chain order: if each event is influenced solely by its most immediate predecessor, then the chain is said to be 1st order; if the influence extends back through the two most recent predecessors, the chain is 2nd order, and so forth.
The chapter next focuses upon properties of 1st-order chains: waiting counts and stationary probabilities. Then follows a survey of Markov processes used in composing programs. These include Hiller & Isaacson's Illiac Suite, Experiment 4, Iannis Xenakis's Markovian stochastic music, Hiller & Baker's Computer Cantata, and David Zicarelli's Jam Factory. Special mention is given to the INTERVAL feature of Gottfried Michael Koenig's Project Two. Curtis Roads's PROCESS/ING program also receives acknowledgement here, although since Roads came to his concept by way of connection networks, he would perhaps not be happy with my classification.
Chapter 6 was converted into a language-independent presentation and sent along to Leonardo, which published it in 1989 as “The Markov Process as a Compositional Model: A Survey and Tutorial”. That being the case, the only content unique to the typescript is its FORTRAN implementation detail.
The selection model adopted in this chapter resembles Project Two to the extent that decisions pertaining to a specific musical attribute all draw from the same supply of options. Project Two contains no selection feature implementing Markov chains, though the INTERVAL chord-selecting principle does betray Markovian influences.
Suppose you have a sequence of notes and you want to assign a pitch to each note. Suppose further that you want to draw these pitches from the set E4, F4, G4, A4, B4, C5, and D5 and that you want consecutive pitches to obey the following constraints:
You want any skip to resolve by step in the opposite direction, and following stepwise motion you want the pitch either (1) to continue stepwise in the same direction or (2) to skip by a third in the opposite direction (an escaping tone). Notice in Table 1 that whenever a source pair of pitches proceeds by stepwise motion, five target pitches are offered; otherwise, the target pitch can only be the stepwise resolution.
Markov matrices impose constraints against particular sequences of supply elements by the simple expedient of setting the corresponding transition probability to zero. Constraint satisfaction was historically a core subject in artificial intelligence; within a Markov matrix, however, the fact that a source admits no targets means that when a chain reaches such a source, the chain ends. Under no other circumstances will the constraints inherent within the matrix bring about an impasse.
The constraints and preferences just listed have been itemized in Table 1, which is technically distinguished as a 2nd-order Markov transition matrix. Suppose this process had previously selected pitch A4 for note k−2 and pitch G4 for note k−1. Then for note k:
The denominator in any given row is always chosen so that the probabilities sum to unity.
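The weight-to-probability step can be sketched as follows. The row contents here are hypothetical, chosen only to illustrate the arithmetic; they are not an actual row of Table 1:

```python
# Converting one matrix row's integer weights into probabilities:
# divide each weight by the row total (the denominator). The row
# below is hypothetical, not an actual row of Table 1.
row = {"F4": 1, "A4": 2, "B4": 3}
denominator = sum(row.values())                     # here, 6
probs = {pitch: w / denominator for pitch, w in row.items()}
print(probs)                                        # B4 gets 3/6 = 0.5
```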
Now it happens that whenever you have a matrix like Table 1, in which each state communicates (directly or indirectly) with every other state, it is possible to calculate steady-state probabilities. For example, the steady-state probabilities for Table 1 are listed in Table 2.
This is the same distribution used to guide the weighted random selection scenarios described for Chapter 4 and Chapter 5, and indeed the present scenario provided the genesis for these values. The taperings off toward E4 and D5 reflect the fact that these pitches can only be approached from one side.
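One standard way to obtain such steady-state values is power iteration: start from any distribution and repeatedly multiply by the transition matrix until the distribution stops changing. The chapter's Table 2 derives from the 2nd-order Table 1, whose entries are not reproduced here, so this sketch uses a hypothetical 3-state, 1st-order matrix chosen for brevity:

```python
# Power-iteration sketch for steady-state probabilities.
# This 3-state, 1st-order matrix is a hypothetical stand-in,
# not the matrix of Table 1.
P = {
    "low":  {"low": 0.50, "mid": 0.50},
    "mid":  {"low": 0.25, "mid": 0.50, "high": 0.25},
    "high": {"mid": 0.50, "high": 0.50},
}
dist = {s: 1.0 / len(P) for s in P}   # start from a uniform distribution
for _ in range(200):                  # repeatedly apply dist <- dist * P
    nxt = dict.fromkeys(P, 0.0)
    for s, mass in dist.items():
        for t, pr in P[s].items():
            nxt[t] += mass * pr
    dist = nxt
print(dist)   # converges to {'low': 0.25, 'mid': 0.5, 'high': 0.25}
```

Note how the two edge states taper off relative to the middle state, for the same reason the Table 2 probabilities taper off toward E4 and D5: an edge state can be approached from only one side.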
To generate a chain of pitches using Table 1, one would do the following: select pitches for notes #1 and #2; then, for each subsequent note k, locate the matrix row for the source pair (note k−2, note k−1) and choose the pitch for note k by weighted random selection among the targets in that row.
I have coded a process which selects pitches according to the transition probabilities enumerated in Table 1 and which forces the initial choices of G4 for note #1 and A4 for note #2. Table 3 presents four sequences generated by this process using different random seeds. Each sequence has an accompanying statistical analysis. This analysis provides no insight other than establishing that the steady-state probabilities listed in Table 2 need many more than 48 samples to come into force.
0 | 48 | G4 A4 B4 G4 A4 F4 G4 E4 F4 D5 C5 B4 A4 G4 F4 A4 G4 B4 A4 F4 G4 E4 F4 C5 B4 A4 C5 B4 A4 G4 B4 A4 C5 B4 E4 F4 G4 A4 D5 C5 G4 A4 F4 G4 D5 C5 B4 A4 | Counts (E4–D5): 3 7 10 11 8 6 3
0 | 48 | G4 A4 B4 D5 C5 B4 D5 C5 F4 G4 B4 A4 E4 F4 C5 B4 A4 G4 D5 C5 A4 B4 C5 D5 F4 G4 D5 C5 B4 A4 G4 C5 B4 D5 C5 E4 F4 D5 C5 A4 B4 F4 G4 E4 F4 A4 G4 F4 | Counts (E4–D5): 3 7 7 7 8 9 7
0 | 48 | G4 A4 B4 E4 F4 C5 B4 A4 C5 B4 D5 C5 F4 G4 D5 C5 B4 F4 G4 E4 F4 G4 E4 F4 A4 G4 D5 C5 B4 G4 A4 D5 C5 F4 G4 E4 F4 A4 G4 B4 A4 G4 B4 A4 G4 F4 C5 B4 | Counts (E4–D5): 4 8 10 7 8 7 4
0 | 48 | G4 A4 B4 D5 C5 B4 E4 F4 B4 A4 F4 G4 B4 A4 C5 B4 D5 C5 G4 A4 F4 G4 C5 B4 D5 C5 A4 B4 G4 A4 D5 C5 A4 B4 D5 C5 G4 A4 B4 D5 C5 G4 A4 F4 G4 A4 B4 E4 | Counts (E4–D5): 2 4 8 10 10 8 6
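A process of this kind can be sketched in Python. Since the typescript's code is FORTRAN and Table 1's actual weights are not reproduced here, the matrix below is built from the motion rules alone (stepwise continuation, escaping tone, forced skip resolution) with hypothetical weights; unlike Table 1, it therefore offers at most two targets after stepwise motion:

```python
import random

# Sketch of 2nd-order Markov pitch generation. Weights are hypothetical;
# only the motion rules stated in the text are encoded.
SUPPLY = ["E4", "F4", "G4", "A4", "B4", "C5", "D5"]

def targets(p2, p1):
    """Return {target pitch: weight} for the source pair (p2, p1)."""
    i2, i1 = SUPPLY.index(p2), SUPPLY.index(p1)
    d = i1 - i2
    sign = 1 if d > 0 else -1
    opts = {}
    if abs(d) == 1:                      # stepwise source pair
        cont = i1 + d                    # continue stepwise, same direction
        if 0 <= cont < len(SUPPLY):
            opts[SUPPLY[cont]] = 3       # hypothetical weight
        esc = i1 - 2 * sign              # skip a third, opposite direction
        if 0 <= esc < len(SUPPLY):
            opts[SUPPLY[esc]] = 1        # hypothetical weight
    else:                                # skip: force stepwise resolution
        opts[SUPPLY[i1 - sign]] = 1
    return opts

def generate(n, seed=0):
    """Force G4 and A4 for notes #1 and #2, then chain onward
    by weighted random selection among each source pair's targets."""
    rng = random.Random(seed)
    chain = ["G4", "A4"]
    while len(chain) < n:
        opts = targets(chain[-2], chain[-1])
        pitches, weights = zip(*opts.items())
        chain.append(rng.choices(pitches, weights=weights)[0])
    return chain

print(" ".join(generate(48)))
```

Because every source pair in this rule set admits at least one in-range target, the chain never reaches the impasse condition described earlier; different seeds yield different 48-note sequences in the manner of Table 3.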
The practical programming content of Chapter 6 was provided by Demonstration 4: Markov Chains, which used Markov matrices to select average durations, articulations, registers, and chromatic degrees.
© Charles Ames | Page created: 2017-03-12 | Last updated: 2017-03-12