This is part two of a four-part series about my own odyssey as a sound composer/designer. It is an expanded version of the topics I presented at Invisible Places 2017, held in Ponta Delgada, Azores, in April 2017.
PART 2—ALGORITHMS AND SAMPLES
In part one of this series I discussed becoming interested in creating electronic sound compositions using electrical circuits that evolved into the analogue synthesizer. In those early days many sound composers like myself did not use a keyboard to make the synth a kind of organ, but rather used knobs, circuits, and sequencers to create music from electricity.
I described my own purchase of an EMS “Synthi” synthesizer and creating sound compositions in real time direct to tape. I also mentioned that I could feed signals from the synth to visual displays to create video art. I started creating moving logos for TV ads for local (at that time Omaha, Nebraska) advertising agencies.
That got me interested in the field of advertising, not so much using animated logos – I felt (incorrectly) those were a passing fad – but as I read the adverts given to me to work with I became convinced I could write better copy and design better ads (print, radio and TV). Thus I left my academic position and became a partner in a Midwest agency that ultimately had small offices in Sioux City, Omaha, and Des Moines.
I got out of electronic music for a while, but other events connected with my advertising career ultimately had great influence on my later work back in the sound and music field. The first of these events was the development of the tiny personal computer. I had a lifelong interest in computers but no real access, though I did have a second college major in Symbolic Logic (what is sometimes called “Mathematical” Logic). When I discovered in 1976 or so that one could actually own a small computer for a relatively reasonable price, I knew I had to have one. These early machines did not have much speed, memory, or bulk storage but they did allow an individual to understand computers without having to have access to a computer center. There was no real software one could buy so one had to learn to code. I learned BASIC, assembly language for the MOS 6502 processor, and ultimately Pascal (my first “structured” language).
These early machines did not really have the power for computer music synthesis so I shifted my interest to electronic “publishing,” the presentation and distribution of information via computer. (This was in pre-internet days.) The fields were called “Teletext” (for one-way information distributed wirelessly) and “Videotex,” wired interactive communication more like our use of the internet today. While working in advertising I met a man who was transmitting commodity prices to farmers via small computer terminals receiving data transmitted off the subcarrier of FM radio stations. This seemed like a great use of the personal computer technology. I actually tried to write code that simulated the kind of display that company was using.
I realized that this would be an important growth field that I wanted to be a part of, but I would need access to computer systems other than my early Apple II. I left my little agency and took an academic job at the University of Wisconsin at Stevens Point, where I taught advertising and developed a very early course in designing information for interactive media. This might have been the first course offered in that field in the U.S. for full college credit. The course was cross-listed in the course offerings of Communication and of Computer Science.
That led to my making a presentation at the National Computer Conference in around 1980. My presentation was about interactive information publishing, but at the end of the conference something happened that made me think again about sound design.
NOTE: Some of what follows is also included in my post DESIGNED RANDOMNESS here on the Audio Penguin blog.
The conference was in Anaheim, California. Having been recently married, my wife came with me, and at the end of the conference we decided to drive to Beverly Hills to visit some friends of ours in the entertainment industry. We stopped off for lunch at Marina del Rey. It was a windy day and as I stepped out of the car I was engulfed in a chorus of sound of ropes (“lines” sailors call them) hitting the aluminum masts of scores of sailboats docked at the marina. I was impressed and said to my wife, “Music should be like this.”
What I was reacting to was the beauty of the percussive sounds and the quasi-randomness of their structure. Most pop and classical music follows a specific form. This was not totally random, since the tones were limited by the masts themselves and the direction of the wind, but it was certainly less structured than the kind of music I had studied in music classes. I had not yet heard the term “soundscape” (see the next in this series for that), but I realized this offered its own kind of beauty, even if not a “tune.”
At the time, I did not know how I could create compositions using a similar free-form structure, but that came about eight years later with the marriage of the computer to digital music synthesizers.
The 1980s, characterized by the easy availability and increasing power of microprocessors, saw the development of a series of synths using digital technologies that could communicate with other digital devices (other synths, computers, keyboards, etc.) using a common communications system: MIDI (Musical Instrument Digital Interface).
There were several methods that different digital synths used to produce sound, but I was most impressed by the Kurzweil technology. That brand, and some others, used sampling to produce sounds. Brief sound clips were recorded (by the manufacturer and sometimes also by the user) that could be manipulated. The Kurzweil offered a considerable amount of signal processing so a sound could be modified extensively. This was very much the principle of early musique concrète experiments, just packaged up as a single instrument—more limited than musique concrète in the lab but more convenient as well for the composer.
I purchased a small Kurzweil synth. It actually had a keyboard as part of it, but I did not want to play it like an organ (even though this digital instrument could play chords, unlike many of the early analog synthesizers). I was reminded of the free-form structure of the sounds I heard in Marina del Rey, especially when I was introduced to the software package “M” created by David Zicarelli with initial design input from Joel Chadabe, John Offenhartz, and Antony Widoff.
This was interactive software where one could set up parameters, some fixed (as were the specific sailboat masts at Marina del Rey) and some partially, or even fully, random, more like the wind currents at Marina del Rey. I had no intention at this point to try to recreate the actual sounds of Marina del Rey, just to use that free-form partially random structure for music, rather than the formal structures of most “classical” music.
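The idea of mixing fixed and random parameters can be sketched in code. The following Python fragment is purely my illustration, not the actual “M” software: the pitch palette is held fixed (like the particular masts at the marina) while the timing and intensity of each event vary randomly (like the wind).

```python
import random

# Illustrative sketch only -- "M" was a commercial interactive MIDI program,
# not this code. The pitch set is a fixed parameter; timing and loudness
# are drawn at random, giving a constrained, free-form event stream.

MAST_PITCHES = [60, 62, 65, 67, 70]  # a fixed palette of MIDI note numbers

def generate_events(n_notes, seed=None):
    """Return a list of (start_time, pitch, velocity) tuples."""
    rng = random.Random(seed)
    t = 0.0
    events = []
    for _ in range(n_notes):
        t += rng.expovariate(2.0)          # irregular gaps between "strikes"
        pitch = rng.choice(MAST_PITCHES)   # constrained to the fixed palette
        velocity = rng.randint(40, 110)    # how hard the "line" hits the mast
        events.append((round(t, 3), pitch, velocity))
    return events

for start, pitch, vel in generate_events(8, seed=1):
    print(f"t={start:6.3f}  note={pitch}  vel={vel}")
```

Changing the seed, or replacing the fixed palette with another, yields a different but recognizably similar texture, which is the appeal of this kind of constrained randomness.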
One early piece I created in this fashion was Organicus, a dance piece performed by Sefa Jorques in the Merce Cunningham studio with a video background by Camilo Rojas. She asked for a piece that would remind people of birds, bugs, and small animals in the forest. The video reiterated these themes visually. Here is a short excerpt:
Music created in this algorithmic way did not have to sound like an atmosphere. An example is the opening theme music I created the same way for a radio program, Hudson Valley Worknet. Again, I did not play it on the keyboard but set up parameters in “M” to generate the music, parameters I could control in real time while the computer performed the music. None of this was recorded piecemeal in multitrack fashion. The Kurzweil actually contained 16 synthesizers, each of which could be set up with different parameters and controlled by different “channels” in “M,” all in real time.
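The multitimbral arrangement can be sketched as follows. This is a hypothetical Python illustration, not the actual “M”/Kurzweil configuration: each of the 16 channels carries its own parameter set, events are generated independently per channel, and the streams are interleaved into one real-time performance. All parameter names and values are my invention.

```python
import random

# Hypothetical sketch: 16 channels, each with its own parameter set,
# loosely analogous to the Kurzweil's 16 internal synthesizers each
# driven by a separate channel in "M".

def make_channel_params(rng):
    return {
        "pitch_set": sorted(rng.sample(range(48, 84), 5)),  # per-voice palette
        "density": rng.uniform(0.5, 4.0),                   # avg notes/second
        "velocity_range": (rng.randint(30, 60), rng.randint(70, 120)),
    }

def perform(duration, n_channels=16, seed=0):
    """Generate (time, channel, pitch, velocity) events for all channels."""
    rng = random.Random(seed)
    params = [make_channel_params(rng) for _ in range(n_channels)]
    events = []
    for ch, p in enumerate(params):
        t = 0.0
        while True:
            t += rng.expovariate(p["density"])  # independent per-channel timing
            if t > duration:
                break
            lo, hi = p["velocity_range"]
            events.append((round(t, 3), ch,
                           rng.choice(p["pitch_set"]), rng.randint(lo, hi)))
    events.sort()  # interleave all channels into one timeline
    return events

for ev in perform(2.0)[:5]:
    print(ev)
```

In a real setup each channel's parameters could be adjusted while the stream plays, which is the “control in real time” described above.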
At the time I was using this program, as well as others (such as Music Mouse written by Laurie Spiegel and some experiments with C-Sound and Supercollider), I became aware of the sound designs that the German sound composer Hans Peter Kuhn was doing for theatre productions of Robert Wilson.
When one reads about “sound design” in theatre books it is often concerned just with natural sound effects: crickets for a night scene, birds for a park scene, etc. But what Kuhn was doing for Wilson (in eight channels since he had a good budget) was creating sonic atmospheres that were not fully literal but that reflected the feeling of a place or situation. I realized that I could do this with the tools I was using for what I was calling, at the time, Designed Music.
I was soon asked to do sound environments for the New York premiere of the play If All Danes Were Jews, written by the famous Russian poet Yevgeny Yevtushenko. He came to the production and we shared a considerable amount of red wine afterwards.
Here I tried, in the spirit of Hans Peter Kuhn, to create a feeling of a place in a nonliteral fashion. Below are short excerpts from several scenes:
First, a dark graveyard at night where they were digging up a grave (excerpt):
Next a dank castle with mysterious “dripping” and spirit sounds:
There was a scene in which chess was played with large (three-foot) chess pieces. The sound design for this reflected the intermittent give and take of chess play:
In no way was I trying to reproduce the actual sounds one would hear in these environments. That would come later. For that, stay tuned for part three.
Finally, I was asked to create the sounds of African improvised folk music for use in a film. I set up an algorithmic process that accurately mirrored the structure of certain folk improvisations (which I found in recordings and books). As with all of the sound examples in this post, the sounds were created using a Kurzweil digital synth controlled by a computer running an algorithmic “intelligent music” program.
We will close with this music excerpt: