This is part four of a four-part series about my own odyssey as a sound composer/designer. It is an expanded version of the topics I presented at Invisible Places 2017, held in Ponta Delgada, Azores, in April 2017.
The previous three posts in this series dealt with various techniques I used in creating sound art, a sort of historical overview (though I continue to use some of those same techniques). This post is kind of a work in progress relating to my current thinking and approaches and is, thus, not fully formed. It deals primarily with form and content.
When one wants to consider the properties of sound, the categories one chooses depend heavily on the context of the analysis.
A MUSICIAN, talking about musical sound, will think about:
Attack and Decay
A RECORDING ENGINEER for music may use virtually the same categories as the musician's, but expressed in objective, scientific terms rather than in terms of how the sounds are perceived:
ANOTHER APPROACH WOULD BE TO DECONSTRUCT THE BASIC ELEMENTS OF A SOUND COMPOSITION:
- Force or system that causes sound
- The content of the sound
- The context of the sound
- The structure of the sound elements
An example for, say, a string quartet would be (using the above terms):
- People bowing strings
- Musical tones produced by the bowing
- The interactions with other instruments and notes in the quartet
- The organization of the tones as specified by the composer and performers
This approach could also work with a soundscape composition. For example, in the sounds of lines hitting sailboat masts that I described in two earlier posts, the elements might be:
- Wind causing the lines to blow into the masts
- The sounds produced by the lines hitting metal masts
- The total ambience of the boat marina
- The semirandom sounds produced as the wind shifts in direction and intensity
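The semirandom quality of that last element can be sketched in code. The toy simulation below is purely illustrative and not part of my own working method: a slowly wandering "wind intensity" variable modulates the rate at which line-strike events occur, so the strikes cluster and thin out as the wind shifts. All names and parameters here are hypothetical.

```python
import random

def simulate_line_strikes(duration=60.0, seed=1):
    """Toy sketch: wind intensity wanders as a slow random walk,
    and line-strike events occur at a rate proportional to it
    (an inhomogeneous Poisson-like process)."""
    rng = random.Random(seed)
    t = 0.0
    wind = 0.5          # hypothetical wind intensity, clamped to [0.05, 1.0]
    events = []
    while t < duration:
        # wind drifts semirandomly between steps
        wind = min(1.0, max(0.05, wind + rng.uniform(-0.1, 0.1)))
        # stronger wind -> shorter expected gap between strikes
        gap = rng.expovariate(wind * 2.0)   # mean gap = 1/(2*wind) seconds
        t += gap
        if t < duration:
            events.append(round(t, 2))
    return events

strikes = simulate_line_strikes()
```

Each run with the same seed gives the same event times, but changing the seed or the drift range gives a different, yet statistically similar, pattern, which is roughly what "semirandom" means here: the listener hears structure without exact repetition.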
As I indicated in the first three posts in this series, I tended at first to separate music that was produced—even if generated by an algorithm—from documentary soundscapes.
My newest thinking evolved when I was asked to pair with a visual artist to produce an audio-visual piece about the Hudson River. Four composers were selected, and each paired with a visual artist to present two concerts at two art galleries in the Hudson River Valley.
The approaches of the various composer/visual-artist pairs were each unique, but most did rely on some actual sounds and images from the Hudson River. We were given a short poem about the Hudson River, “An Arrow Pointed Down,” by Sarah Heady. Some artists quoted the poem.
The Hudson is an arrow pointing down (though it flows both ways).
The City is a poured-concrete floor onto which all things land, and sometimes break.
You can hold—with your hands raised above your head, with a system of pulleys, with a net, standing on a ladder—your life and all its parts in the air.
But there is the fact of gravity.
As the poem suggests, the Hudson River flows both ways, north-south and south-north, according to the tides. My visual-artist partner, Lori Adams, and I decided to “go rogue” and produce images and sounds that were not particularly representational of the river itself. Lori decided to use a video technique known as “light painting” to create alternating moving and still light patterns, similar to the way the sun bounces off the river at various times, but completely nonrepresentational in terms of “documenting” the river. I decided to use the organizational principles typical of soundscapes but replace the documentary sonic elements with totally synthesized sound patterns that bore no real resemblance, most of the time, to the sounds of the river. After all, the sound artist Annea Lockwood had already produced the definitive documentary CD of the sounds of the Hudson River. I wanted to branch out into uncharted territory.
This approach borrowed both from my algorithmic sound designs for theatre and from my study of soundscape composition. It was my first attempt in this direction, and there were limitations imposed by the video art as well. The video was very episodic, so the quasi-soundscape needed to be episodic too. I developed a series of sounds, virtually all created with analog synthesizers (sound from electricity), and, when the works were performed in the two art galleries, I created the soundscapes in real time (though the video was prerecorded). The video was presented on two screens simultaneously. There were two pieces, each 7 to 8 minutes long. The YouTube clip below shows the single main screen and an example of my quasi-soundscape/quasi-music real-time sonic treatment.
This was a first experiment in combining the structure of soundscapes with totally nondocumentary sounds, and it has sparked my interest in a future project using this approach.
The structure of soundscape compositions is less architectonic than that of music, and the transitions can be either smooth or abrupt, depending on the soundscape. Musical sounds also tend to have more fixed pitches than the sounds of a soundscape.
My next project is to create one or more CDs with the series title “Soundscapes from Undiscovered Planets.” Here I propose to use the overall structure of soundscape art and a collection of possible “soundscape” elements from planets with different environments. But all the sounds will be created by electronics, none recorded from a real soundscape. As I finish pieces for this collection I will post them along with a more elaborate explanation of the structural differences between soundscape elements and musical ones, apart from documentary references.