Warren Burt

. "VoiceChords II with "Dedication" by Rewi Alley" from "Antipodean Collection"
. "Some Physical Virtual Sensuality" from "Antipodean Collection"
. "Brisbane Nocturne" from "Antipodean Collection"
. "Scraps from the Lab Floor" from "Ear-Cleaners"
. "64 Golden Chords" from "Ear-Cleaners"

Warren does lots of things. He makes music, produces radio shows, makes videos and films, does performance art, writes criticism, builds instruments, organizes community events, teaches privately, occasionally teaches inside an institution, writes software, and a few other things. Central to all these activities is a concern with the body, with performance, and with real time.

"I started working with electronic music in 1968, working primarily with analog synthesizers, and, except for a couple of pieces, working in friends' studios. I didn't really work with computers and "computer music" until 1982, when I bought an AIM-65 single-board micro that I could own myself, and that I could program to perform with in real time. My concern has always been with affordability, accessibility, and performability.

But while I'm concerned with accessibility of equipment, I'm not in any sense concerned with "aesthetic accessibility." That is, if one has in one's head a set of rules or ideas that divide artistic material into "accessible" and "inaccessible", then the game is pretty well lost already, and one's ability to explore, probe, and find things out will be severely hampered. A parable for this, for me, is the recent closure of the Music Department at La Trobe University in Melbourne. For 15 years, the Music Department followed the Administration's orders - every time a budget cut came through, or a request to restructure the course to fall more in line with the current administration's vision of what a course should be, they complied. They bent over backwards obeying more and more draconian orders. In the end, as a reward for their obedience, they were shut down. This showed me that the reward for following orders is to be done away with. I think this metaphor can be extended from academic administrators to almost every other aspect of the arts. So, since you're going to be rejected if you follow societal or aesthetic or stylistic orders, you might as well do what you like anyway - the only true artistic satisfaction is self-satisfaction. (This should be taken to apply only to artistic activities, and not to things like, for example, murder or bank robbery. I definitely believe in a social contract, where we all behave "properly and politely" so that we can all get on with what we do! I'm not going to go as far as Aleister Crowley and say "do what thou wilt shall be the whole of the law!")

So my work does not usually conform to "commercial" or "common-practice" models. There are enough people out there doing that already. Although some of my work is very funny, I don't regard myself as essentially an entertainer. I think of myself as an explorer, who shares the results of their explorations with like-minded and curious friends and acquaintances.

With that said, sometimes my work is extremely pretty - if the processes I'm dealing with lead me in that direction, then I'm quite happy to live in a beautiful artistic universe for a while. An example of this would be my piece "VoiceChords II, with "Dedication" by Rewi Alley." Rewi Alley (1897-1987) was a New Zealand poet who spent the last 60 years of his life in China. In Springfield, New Zealand, there is a memorial to him. At the memorial, if you press a button, you can hear an actor read the first half of his 1948 poem "Dedication." I did, and recorded the result, with school children and a truck in the background. I then played the recording through a computer sound modification patch. The patch processed the voice with very resonant filters, so that if the voice contained a frequency a filter was tuned to, the filter would ring at that frequency, producing a glowing burst of sine wave to accompany the voice, and in tune with it. Since Alley was a socialist realist poet, I felt the accompaniment should reflect the musical values of socialist realism, and so I tuned the filters to a progression that a socialist realist composer such as Copland might have approved of. In performance, I have 12 buttons on a MIDI controller (they could be keys as well), and these 12 buttons select 12 different tunings of the filters - the chords of the progression. Once I set the recording of the poem in motion, I press the buttons, performing a traditional mid-20th-century tonal harmonic progression to accompany Alley's beautiful mid-20th-century poetry. The software used for the piece was AudioMulch, Ross Bencina's wonderful sound processing toolkit (www.audiomulch.com).

In the screen shot of the software (above), you can see the 5combs filter, the phase shifter, and the sound spatialiser. The last two work fairly automatically; the 5combs filter I change in performance with my MIDI controller. With AudioMulch, any control you see on the screen can have MIDI control or internal program automation applied to it, so you're not limited to just one control with a mouse at a time. You can perform multiple controls simultaneously, should you have a controller that can generate them. I have such a controller (the Peavey PC1600x), and I often perform with the computer off to one side, so that the only visible electronic music equipment is my controller, which looks something like a small mixer. My dream of computer music performance would be to perform someday with no computer physically present at all!
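The ringing-filter technique described above can be sketched in a few lines of Python. This is not the AudioMulch patch itself - just a minimal two-pole resonator bank, with made-up function names and an A-major-ish chord standing in for the socialist-realist progression:

```python
import math

def resonator(x, freq, sr=44100.0, r=0.999):
    """Two-pole resonator: rings near `freq` when excited; the closer r is
    to 1, the longer the sine-like ring (the 'glowing burst' effect)."""
    w = 2.0 * math.pi * freq / sr
    b1, b2 = 2.0 * r * math.cos(w), -r * r
    y1 = y2 = 0.0
    out = []
    for s in x:
        y = s + b1 * y1 + b2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def filter_bank(x, chord_freqs, sr=44100.0):
    """One resonator per chord tone, mixed -- a rough stand-in for
    retuning a bank of resonant filters to each chord of the progression."""
    banks = [resonator(x, f, sr) for f in chord_freqs]
    return [sum(samples) for samples in zip(*banks)]

# Excite an A-major-ish 'chord' of resonators with a single impulse
# (a stand-in for the voice recording):
impulse = [1.0] + [0.0] * 4409
ring = filter_bank(impulse, [440.0, 554.37, 659.25])
```

Driving the bank with a voice recording instead of an impulse would make each filter ring whenever the voice crossed its tuned frequency - the in-tune sine-wave halo the piece is built on.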

My least favourite electronic arts performance visual is the sight of a man (and it usually is a man, sigh...) with his head buried in his laptop. I've tried all sorts of strategies to get around this. Some photos illustrate them. In the first (left), my head is indeed buried in the laptop, but the laptop is surrounded by toy penguins, and I'm performing with the wonderful Melbourne-based improviser and composer David Tolley. This is from an April 1998 performance we did at Melbourne's Theatre of the Ordinary. There is plenty of physical interaction between us, and we're constantly giving body language cues to each other. The sound I was playing was samples of Tolley's electric cello, which I modified in real time, again using AudioMulch. In fact, in this improv, we got into a bit of a loop - my computer modifications of his sound made him play faster and wilder, which made me change the modifications on my machine more quickly, which made him play more wildly still, etc. At the end of the improv, we were both quite winded!

In the second photo (missing pic - ed.), I'm performing in the observation tower at Melbourne's Ripponlea Estate - an historic house and garden. This was part of the Recent Ruins exhibition of sound installations held at Ripponlea by the Contemporary Music Events Company in November 1999. In this performance the laptop is on my lap, and it's connected to a tiny amplifier (outside of the photo) which distorts the string instrument samples I'm using so that they sound like a Japanese koto. In fact, the music I'm playing is tuned in a series of ancient Greek modes which, to the contemporary ear, do indeed sound Japanese! I'm also reading from a book of Chinese poetry of the Ming and Ching dynasties. So here, the combination of performance in an outdoor environment, live reading, and a tiny amplifier removes the piece from the alienating heroics of the computer performer in the club, and renders things a bit more physical and human-scale.

In the third photo (right), the laptop is on the floor, playing very sharp, spiky sounds controlled by SoftStep software (more about that later), while choreographer Anne O'Keeffe and I move small high-quality loudspeakers at different angles, creating different echoes, so that the audience can hear how sound outlines the physical dimensions of a space. This performance occurred in late 1998 at Theatre of the Ordinary, Melbourne.

The fourth photo (left below) is of an installation work of mine, "Installation for Three Laptops", which took place at Monash University, Melbourne, as part of the "First Iteration Conference" in 1999. Here, the laptops themselves are the focus. They are mounted on plinths, along with the loudspeakers, toy penguins, and other soft toy animals. The laptops are in a feedback loop. That is, information from laptop 1 goes into laptop 2, which responds and sends information to laptop 3, which responds and sends information to laptop 1, and so on. The result of this feedback loop is heard as flurries of microtonally tuned electric-piano notes. Imagine three free-improvisation electric piano players playing softly, yet at top speed, and you'll get the idea. The installation is purposely set up to look like a department store display. Both the cute toy animals and the commercial-style installation are designed to emphasize the consumer-level culture that produces the tools we work with. The sounding results, which are anything but commercial, emphasize how other potentials can come out of this.

And occasionally, I'll even dispense with the presence of the computer altogether! In the fifth photo (right below), I'm performing on my copy of Harry Partch's "harmonic canon" instrument, while dancer Vanessa Case, off stage, mixes sound made by the computer to accompany me. The music for this, although quite pastoral, was in 23 tones per octave, not the usual 12, so that I could have a richer harmonic palette than the Western 12-note scale offers. This photo is from a November 2001 performance at Theatre of the Ordinary.
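The 1 -> 2 -> 3 -> 1 topology of the installation can be illustrated with a toy simulation. The response rules and numbers here are invented for illustration, not taken from the actual installation software:

```python
def respond(note, seed):
    """One 'laptop' hears a note and answers with a deterministic
    transformation of it (a stand-in for whatever each machine ran)."""
    return (note * 7 + seed) % 24   # 24 pitch slots: a microtonal-ish grid

def feedback_loop(start_note, steps):
    """Pass notes around the ring: laptop 1 -> 2 -> 3 -> back to 1."""
    seeds = [5, 11, 17]             # each machine gets its own response rule
    notes = [start_note]
    for i in range(steps):
        notes.append(respond(notes[-1], seeds[i % 3]))
    return notes

melody = feedback_loop(0, 30)       # one flurry of notes around the ring
```

Because every machine's output is the next machine's input, even these simple deterministic rules produce a stream of notes that none of the three "players" fully controls - the flurry belongs to the loop.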

In the sixth and seventh photos (below, left & right), Vanessa Case and I are performing live while computer-made sounds (a series of chords built from piano samples of all 12 Western notes - that is, each chord had all 12 tones in it, spread over many octaves) accompany us. My persona for this performance was that of a stodgy academic, whose partner was desperately trying to re-establish a connection with him. The failure of these attempts to re-establish connection provided the piece with much of its hilarity. Again, this is from a November 2001 series of performances at Theatre of the Ordinary.

Musically, I'm very involved in setting up real-time processes and then controlling them in performance, shaping the output of the machine as it presents me with options and choices. If you like, I program the machine to be like an improvising partner, and together we shape the final music. In my piece "Some Physical Virtual Sensuality", I control a computer while talking about the issue of the physicality of computer performance. I have my finger on the touch pad of the computer, and movements along one axis of the pad produce changes in frequency modulation and a vowel filter (a filter that produces spectra resembling human vowels), while movements along the other axis produce pan and volume changes. To make this piece, I used Martin Fay's synthesizer program Vaz Modular (www.software-technology.com) controlled by SoftStep (http://algoart.com). I find the combination of Vaz Modular and SoftStep to be very powerful indeed. In the screen shot of the Vaz patch for this piece (below), you can see the Vowel Filter, with "female singing" and a variety of vowels selected, and you can also see how I'm controlling the frequency of Oscillator 1, which is then frequency modulating Oscillator 2. It's the combination of frequency modulation and vowel filtering that gives the piece its whiny, nasal sound.
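A bare-bones version of the Oscillator 1 -> Oscillator 2 frequency-modulation routing, with a hypothetical touch-pad mapping along the lines described, can be sketched as follows (the actual Vaz Modular patch is of course more elaborate, and these function names are my own):

```python
import math

def fm_tone(carrier_hz, mod_hz, index, dur=0.1, sr=44100):
    """Oscillator 1 (mod_hz) frequency-modulates Oscillator 2 (carrier_hz).
    `index` scales the modulation depth -- more index, more sidebands."""
    n = int(dur * sr)
    return [math.sin(2 * math.pi * carrier_hz * t / sr
                     + index * math.sin(2 * math.pi * mod_hz * t / sr))
            for t in range(n)]

def pad_to_params(x, y):
    """Hypothetical touch-pad mapping: one axis -> FM amount, the other
    axis -> a pan/volume pair, per the piece's description (x, y in 0..1)."""
    index = 8.0 * x               # more finger travel, more modulation
    pan, vol = y, 0.2 + 0.8 * y   # pan and volume move together here
    return index, pan, vol
```

With `index` at 0 the result is a plain sine; sliding the finger raises the index and the spectrum thickens, which is roughly the gesture-to-sound relationship the piece performs.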

As well as being involved in physicality, I'm also interested in a number of other areas of inquiry. The intersection of mathematics and music fascinates me. Number patterns heard as melodic material, as tunings, as timbres - all pique my curiosity. Many of my pieces begin with the question: "I wonder what it would sound like if..." In the days of analog synthesizers, I would produce labyrinthine control patches, setting up processes that drove themselves in most intricate ways. About three years ago, I became involved with John Dunn's SoftStep project. SoftStep is a Windows-based algorithmic music program designed by John Dunn of Algorithmic Arts of Fort Worth, Texas, with input from a number of friends around the globe, including me. In its current state, I think it's one of the most interesting, user-friendly, and affordable control environments for the contemporary computer music composer. It has a myriad of fractal and other mathematical functions that can be applied to any aspect of music you desire. "Brisbane Nocturne" was a piece I made in 1999, designed to show off many of the functions of SoftStep as they existed at the time. Somewhere along the line, the piece went from being a demonstration of functions to being a piece of music. In fact, I recommend listening to it with your eyes closed, with no thought for which equation is generating what aspect of the music. But for those with a structural curiosity, the screen shot of the SoftStep patch for "Brisbane Nocturne" will show a number of aspects of the program (below).

In the upper left hand corner of the screen is the Ball module. This generates rhythms based on the bouncing of a ball within a 4-walled space. Both horizontal and vertical motion of the ball can be externally controlled, too. Below that is a Fractal module, here generating a Mira fractal, the output of which is scaled to a range of 0 to 7 and used to update the elements of a table of pitch values, which are also fed by the Chaos module underneath it. The Chaos module is set to a function called "Burt Shift", which is simply a shift register feedback circuit that produces pseudo-random numbers. Also of interest is the Image module in the lower right, which reads pixel values from any 128x128 pixel image. In this case, the image was generated with the classic "James Gleick's Chaos" program. One could write a complete article about this piece (I have - it's in the Proceedings of the 1999 Australasian Computer Music Association Conference - http://www.acma.asn.au), but perhaps just one more example would be useful here. In the middle bottom, there's a button called B-1, with a note below it that says "Add morse-thue melodies". When this button is pushed, a melody played by a rather bell-like tone begins. It is controlled by one of the horizontal bar controls to its left, and produces the ascending melodies that occasionally appear in the piece. These melodies follow the output of the Morse-Thue equation, which produces patterns with a large degree of self-similarity.
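The Morse-Thue (Thue-Morse) sequence itself is easy to compute - the n-th term is the parity of the 1-bits in the binary form of n - and a common algorithmic-music reading of it takes digit sums modulo a scale size. How SoftStep computes it internally isn't specified here; this is just the standard construction:

```python
def thue_morse(n):
    """n-th Thue-Morse term: parity of the 1-bits in n (0 or 1)."""
    return bin(n).count("1") % 2

def morse_thue_melody(length, multiplier=1, scale_size=12):
    """A common musical reading: digit sum of n * multiplier (in base 2),
    taken modulo a scale size, yields a self-similar stepwise melody."""
    return [sum(int(d) for d in bin(n * multiplier)[2:]) % scale_size
            for n in range(length)]
```

The sequence's self-similarity is literal: term 2n always equals term n, so the melody contains slowed-down copies of itself - which is why such patterns sound coherent rather than random.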

I'm also interested in mistakes, errors and glitches. My interest in these goes back to the early 1970s, when I played in a music/performance art group called Fatty Acid. We played violin, mandolin, and accordion - badly, and often with inappropriate repertoire. For example, one of our big 'hits' was the Prelude and Liebestod from Wagner's Tristan und Isolde, performed on our three instruments, right off the piano reduction. We played as well as we could, and we kept the rhythmic impetus of the piece alive. As a result, our performance was hilarious. If you knew the original, you could hear how we approached it and strayed away from it. If you didn't know the original, the result still had a rather surrealist, stretched quality. With Fatty Acid, I became seriously interested in the effects of mistakes, hearing them not as deviations from a norm, but as sounds in their own right. And in their own right, some of the sounds produced were quite beautiful. Similarly, in my work with technology, I've often pushed circuits or software beyond what they could do, or used them in unorthodox ways, in order to find out what other sounds, besides those envisioned by the designer, could be produced.

In 1995, I began working with the now-obsolete DOS software "US". Written by Adam Cain, at the University of Illinois, and Dave Muller, at the University of Iowa, US was a very nice sound generating and modifying toolkit. It also allowed you to write your own extensions to it. I became intrigued with the idea of mangling the number files produced by a fast Fourier transform function, and used US and PowerBasic to do this. Simply put, a fast Fourier transform function analyses a very short section of a sound and produces a set of numbers (which can be stored in a file) describing the frequency makeup of that short section of sound. If these numbers are then fed into an inverse fast Fourier transform function, the result should be your original sound. If, however, you change these numbers, the result will be your sound changed either slightly or radically, depending on how you changed them. To get understandable results out of a fast Fourier transform, it is important that you do not change the format of the numbers, but only the numbers themselves. This, of course, was too much of a temptation for me, so I wrote programs which, of course, juggled the format of the fast Fourier transform numbers outrageously. Then I converted them back into sound. The result was the piece "Scraps from the Lab Floor." I think the total sound input into the piece was one sample each of the voices of myself, Harry Partch, and Kenneth Gaburo, a sine wave, and a brief snatch of some piano playing by Bill Evans. (Memory might fail me - there might have been another sample or two used, but not many.) All the sounds you hear in the piece, which I regard as a rather fun and high-spirited collage, are the results of my format-mangling programs. Sometimes the modifications are very slight. Sometimes, they're so radical that any resemblance to the original is totally lost.
But like a mad scientist assembling "scraps from the lab floor", I put them all into this, my good-humoured sonic monster.
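The analysis -> mangle -> resynthesis chain can be sketched in miniature. This uses a naive DFT rather than US's FFT routines, and the "mangling" shown (reversing the bins) is just one arbitrary format-abuse standing in for the many that the original programs performed:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform: the 'set of numbers' describing
    the frequency makeup of a short stretch of sound."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(bins):
    """Inverse transform: fed the untouched numbers, it returns the
    original sound (we keep only the real part for a real signal)."""
    n = len(bins)
    return [sum(bins[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def mangle(bins):
    """A deliberately 'wrong' edit in the spirit of the piece: shuffle
    the bins' layout instead of politely tweaking values in place."""
    return list(reversed(bins))

# A short test tone: round-trip it cleanly, then mangle-and-resynthesize.
tone = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
clean = idft(dft(tone))            # comes back as the original
warped = idft(mangle(dft(tone)))   # comes back as something else entirely
```

The clean round trip reproduces the input; the mangled one lands the energy at different frequencies - a tame, single-operation glimpse of how scrambling the transform's layout rather than its values yields sounds the analysis format never intended.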

Another area of intense interest of mine is tuning. I've already mentioned a recent 23-tone-per-octave piece of mine, and since first getting interested in tuning in the early 1970s, I've probably worked with most of the existing and speculative tuning systems on the planet. My friend, the music theorist Erv Wilson, sent me a chart in 1997 which described a number of ways that phi (the golden section) could be realized as chords. There were, in fact, 64 of these. So with an additive synthesis program written by my friend Arun Chandra, and with US, I made a piece which consists simply of all 64 of these chords, one after another. The result is 64 chords made only of sine waves, the purest of all possible sounds.
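Wilson's chart of 64 chords isn't reproduced here, but one plausible way phi can be "realized as a chord" - stacking tones whose frequencies are successive powers of the golden ratio - together with bare-bones additive synthesis of sine waves, might look like this (the chord construction is my illustration, not Wilson's):

```python
import math

PHI = (1 + 5 ** 0.5) / 2  # the golden section, ~1.618...

def phi_chord(base_hz, n_tones):
    """One reading of 'phi as a chord': frequencies at successive
    powers of the golden ratio above a base frequency."""
    return [base_hz * PHI ** i for i in range(n_tones)]

def additive(freqs, dur=0.25, sr=44100):
    """Additive synthesis: an equal-amplitude sum of sine waves --
    the 'purest of all possible sounds' the piece is built from."""
    n = int(dur * sr)
    amp = 1.0 / len(freqs)
    return [sum(amp * math.sin(2 * math.pi * f * t / sr) for f in freqs)
            for t in range(n)]
```

Because phi squared equals phi plus one, every interval in such a stack is related to its neighbours by the same proportion - which is what makes golden-section chords structurally self-similar, whatever specific realizations Wilson's chart used.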

Curiously, the piece is, for me, fairly hard listening. The sine-wave timbres produce chords with very strange, un-instrument-like balances - often, the most prominent harmonics are at the top of the sound, rather than at the bottom, as would be the case with most acoustic instruments. This makes the timbres sound rather "uneasy", but it's an uneasiness that I'm more than happy to live with and explore. Since these chords express the golden section, which is also found in the geometry of plant life, this piece was used as the soundtrack to a video piece, "64 Views of the Wetlands", which was incorporated into my 1998 solo political multi-media theatre piece "Diversity." "64 Views of the Wetlands" is just that - 64 fixed-camera video shots taken by me of scenes in the Bittern Wetlands on Western Port Bay southwest of Melbourne. Each scene lasts the duration of one chord. The analogy is between the nature-like structure of the golden section and the actual nature-produced structure of the wetlands and the plants within it. As someone who is concerned with the environment, it was important for me to make a piece which put the environment right into people's faces, and didn't pretty it up into some new-age fantasy. The severity of the chords and the severity of the wetlands environment seemed a very effective combination in showing this more harsh, unforgiving side of nature and its structuring.

Warren Burt
PO Box 2154
St. Kilda West, Vic. 3182