Published in Leonardo Music Journal, Volume 7, January, 1998, MIT Press, with a companion CD recording of a piece composed with the techniques described.

An Information Theory Based Compositional Model

by Laurie Spiegel
May, 1997

Put simply, information theory[1] is a mathematical theory of how to optimize a signal for communication in a noisy channel and of how communication degrades in such a medium. My piece on the accompanying LMJ CD is an example of its musical use.

To understand its application, let's start simply by putting a basic melodic pattern of 16 pitches into a digital array from which they can be played as notes. On first hearing, it's pure information. Every note we hear is informative, unpredictable, not just confirming something heard before. Its entropy, or informational content, is high.
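For reference, Shannon's measure of this informational content: if each possible note value i occurs with probability p_i, the average information per note is

```latex
H = -\sum_i p_i \log_2 p_i \quad \text{bits per note}
```

When every pitch is a fresh surprise the p_i are spread across many possibilities and H is large; once a pattern is known by heart, each p_i collapses toward certainty and H toward zero.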

But it falls far short of being a musical composition. It lacks development, evolution, and form. Our sequence offers little pleasure and contains no means of conveying emotions through sameness and difference, anticipation, prediction, surprise, disappointment, reassurance, or return.

Next, repeat the pattern cyclically. At first this feels more musical. But the longer we listen, the more boring it becomes. Our sense of anticipation grows as we wait for something more, for change, uncertainty, the unpredictable, the resumption of information. The average entropy over the whole "block" of this listening session continues to dwindle toward the infinitesimal with each additional repetition. It's completely redundant, providing nothing unforeseen. The data is reassuringly hard to miss or forget, but we're certain what each note will be before we hear it.
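The two stages so far can be sketched in a few lines of code. Python is used here purely as illustration (nothing about the sketch is specific to the original FORTRAN implementations), and the pitch values are arbitrary MIDI-style note numbers chosen for the example:

```python
# A fixed melodic pattern of 16 pitches stored in an array
# (MIDI-style note numbers, chosen arbitrarily for illustration).
PATTERN = [60, 62, 64, 67, 69, 67, 64, 62,
           60, 64, 67, 72, 69, 67, 64, 62]

def cyclic_note(i):
    """Return the i-th note of the endlessly repeating cycle."""
    return PATTERN[i % len(PATTERN)]
```

Heard once through, every value is news; cycled indefinitely, `cyclic_note` tells us nothing we don't already know.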

To make it function better as music, we can introduce noise, which cannot be predicted and therefore acts as information in this context by increasing the amount of uncertainty each note will resolve. We use it to raise the number of possible values each note could have from only one to two, three, several, or many. We program the random corruption of our pattern during its travel from safe unchanging storage array through the communications channels of software and sound to our ears and minds. It's more interesting now with greater entropy, but is it more musical?
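A minimal sketch of such corruption-in-transit, again in illustrative Python with assumed MIDI-style pitches: with probability p, the stored value is lost in the channel and an arbitrary pitch arrives in its place.

```python
import random

# The unchanging pattern in "safe storage" (arbitrary example pitches).
PATTERN = [60, 62, 64, 67, 69, 67, 64, 62,
           60, 64, 67, 72, 69, 67, 64, 62]

def corrupted_note(i, p, rng=random):
    """With probability p, replace the stored pitch with a random one
    drawn from a two-octave range; otherwise pass it through intact."""
    pitch = PATTERN[i % len(PATTERN)]
    if rng.random() < p:
        return rng.randint(48, 72)   # noise: any pitch in the range
    return pitch
```

At p = 0 the cycle is perfectly redundant; at p = 1 nothing of the pattern survives the channel.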

Random corruption should not be confused with random generation. Many note generating algorithms just plug random number streams into aspects of sound. But even with elaborate filter logic, that's not the same. Noise is the replacement of explicitly defined information with random data at random times. It's the degradation of otherwise fully intelligible signal. In this case, we are using random noise in place of information to increase entropy, to counteract redundancy. This is outside of information theory but works because music is self-referential and sensory rather than symbolic.

The next refinement of our model is based on the fact that not all components of a signal are equally vulnerable to noise. For example, quieter sounds are lost to noise before loud ones. The probability of losing any specific predetermined sound to noise depends on its individual characteristics and context, and this provides us with powerful compositional variables. These variables in sum give us direct control of musical entropy, of the amount of uncertainty that hearing each sound will resolve.

The logic for selecting sounds vulnerable to corruption is critical but simple, and for it, we continue to draw on information theory. Sounds containing less signal energy are more likely to be affected by noise, and conversely. In other words, it makes sense in light of nature that the quietest and shortest notes, and those poorest in harmonic (timbral) content (redundancy within spectrum), should be corrupted first. These are the notes most easily lost to radio static or environmental noise at a concert, and in traditional repertoire also tend to be less structurally important, like passing tones on weak beats versus downbeats or pedal tones.
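This vulnerability logic might be sketched as follows. The three factors and the toy "energy" product are assumptions made for illustration, not the article's exact formula: amplitude, duration, and spectral richness are normalized to the range 0 to 1, and the quietest, shortest, spectrally poorest notes come out most corruptible.

```python
def vulnerability(amplitude, duration, richness):
    """Crude 'signal energy' heuristic: quiet, short, spectrally poor
    notes are the most easily lost to noise.  All inputs are assumed
    normalized to 0..1; the result is the note's susceptibility."""
    energy = amplitude * duration * richness     # toy energy measure
    return 1.0 - energy

def corruption_probability(base_p, amplitude, duration, richness):
    """Scale a global noise level by each note's individual vulnerability."""
    return base_p * vulnerability(amplitude, duration, richness)
```

A loud, long, harmonically rich note resists the noise entirely; a faint grace note on a weak beat is the first casualty.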

Based on these various noise vulnerability factors, we can now create a master variable representing overall momentary entropy, and control it interactively in real time, perhaps turning a knob to increase or decrease the probability that each next pitch of our unchanging cycle will be randomly replaced by some other value. We can use this entropy variable to sculpt overall musical form, to manipulate tensions and expectations and thereby the listener's emotions. We can now compose more meaningfully.
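Putting the pieces together, a hypothetical "entropy envelope" can stand in for the knob: a time-varying probability that each note of the unchanging cycle is replaced, shaping an order-noise-order arc. The pitch values and ranges are again illustrative assumptions.

```python
import random

PATTERN = [60, 62, 64, 67, 69, 67, 64, 62,
           60, 64, 67, 72, 69, 67, 64, 62]

def render(entropy_envelope, rng=None):
    """Step through the cycle, one envelope value per note, letting the
    time-varying 'entropy knob' set each note's chance of replacement."""
    rng = rng or random.Random()
    out = []
    for step, p in enumerate(entropy_envelope):
        pitch = PATTERN[step % len(PATTERN)]
        if rng.random() < p:
            pitch = rng.randint(48, 72)   # note lost to noise
        out.append(pitch)
    return out

# A swell of uncertainty over three cycles: order -> noise -> order.
envelope = [0.0] * 16 + [0.6] * 16 + [0.0] * 16
notes = render(envelope)
```

Raising and lowering the envelope is the compositional act: tension as entropy climbs, reassurance and return as it falls back to the pure pattern.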

If man has evolved music as a communications medium in a noisy world, it must embody reflections of other structures, including cognitive processes and natural sounds, and have built deeply into its nature, among the many things that make it work, compensations for natural phenomena which would otherwise diminish its effectiveness. These must also be phenomena that follow natural laws, in this case Shannon's[2,3].

We should also consider the nature of the noise affecting our signal. Not all noise is created equal nor does it exist in a vacuum as a Platonic Gaussian ideal. Much of the "noise" that affects our ability to communicate is generated by something that makes sense in some other context. I consider randomness a relativistic phenomenon: any signal, no matter how internally consistent or meaningful it is within its own context, may be perceived as random noise relative to some other coherent signal.

So our noise source may be, for example, some other musical pattern, or even our own pattern played asynchronously against itself, producing interference patterns which may exhibit their own structure, or it might be a weighted or contoured spectrum of data such as a fractal sequence or 1/f noise.

In order to generalize this compositional model beyond computer use, we might ask whether there actually is any such process as composition, and if so, what it is. We can ask how, whether, and why we distinguish the original creation of musical materials from their transformation. Instances of transformation abound, in arrangements, orchestrations, and variations, and there exist many standard techniques whereby new music is derived from old[4]. Instances of original creation also appear to be abundant, but are they really?

I would suggest that even in the inner realm of auditory imagination, what we interpret as spontaneous generation may be just the transformation of previously experienced material as it moves within the human perceptual and cognitive systems, informational channels in which it could well be vulnerable to the noise of our many coexistent memories and thoughts. If so, the application of information theory to music could have much broader productive implications.

I first learned about information theory at Bell Labs, Murray Hill in the 1970s, and found it an excellent and very musically useful model. I used FORTRAN implementations of such logic to compose computer-generated works throughout the 1970s and found it aesthetically workable. I have long wondered why information theory has not yet become more commonly used by others composing with computers. Perhaps this article will help to inspire more exploration of its few very basic but extremely powerful principles.

1. Pierce, John R., Symbols, Signals and Noise: The Nature and Process of Communication (New York: Harper & Brothers, 1961), reprinted unabridged and revised as An Introduction to Information Theory: Symbols, Signals and Noise (New York: Dover Publications, 1980).

2. Shannon, Claude E., "A Mathematical Theory of Communication", Bell System Technical Journal (1948), reprinted in Shannon and Weaver, The Mathematical Theory of Communication (Univ. of Illinois Press, 1949).

3. Shannon, Claude E., "Communication in the Presence of Noise", Proceedings of the Institute of Radio Engineers, Vol. 37, pp. 10-21 (1949).

4. Spiegel, Laurie, "Manipulations of Musical Patterns", Proceedings of the Symposium on Small Computers and the Arts, IEEE, pp. 19-22 (1981), or

Composer Laurie Spiegel has been writing and using compositional algorithms since the early 1970s. Information about and examples of her work are available on the web at

Copyright ©1997 Laurie Spiegel. All rights reserved.