The word “distortion” is defined in the Merriam-Webster dictionary as “the act of twisting or altering something out of its true, natural, or original state.”
In this post I want to discuss distortion in the context of audio. It is sometimes a bad thing and sometimes can be used creatively. Passing an audio signal through any kind of electrical device can produce distortion, though sometimes it is so small as to be unnoticeable (for example, an audio signal from a microphone or electronic instrument might be slightly “distorted” just by feeding it through an audio mixer). But here I will discuss distortions that are definitely noticeable.
THE UNWANTED (USUALLY)
Level Mismatch
Probably the most common cause of bad distortion is feeding a too-strong audio level to a device that cannot handle it.
If one feeds the output of, say, a music synthesizer into a microphone input of a recorder or console, major distortion may result: the microphone input is designed to take a very low signal level, and the high line-level output of a synth will overdrive it. There are some devices, especially small hand-held recorders, that are designed to detect the input level and compensate for a stronger signal. Often people not well versed in sound technology will see any input as an available port for any kind of signal. Of course, if a person feeds a microphone into a line-level input, there will be no sound, or an extremely low-level one, causing some to say the input “is not working.”
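For readers who like to see the idea in code, here is a minimal Python sketch of what happens when a signal is too hot for an input stage. The numbers (a tone driven four times too hard, a clip threshold of 1.0) are just illustrative assumptions, not measurements of any real preamp; the point is that the overloaded stage flattens the waveform peaks, and those flattened peaks are the distortion we hear.

```python
import numpy as np

def hard_clip(signal, limit=1.0):
    """Clamp every sample that exceeds what the input stage can handle."""
    return np.clip(signal, -limit, limit)

# A 440 Hz test tone that is four times "hotter" than the level this
# hypothetical mic input expects (both numbers are just for illustration).
sample_rate = 48000
t = np.arange(sample_rate) / sample_rate
line_level_tone = 4.0 * np.sin(2 * np.pi * 440 * t)

clipped = hard_clip(line_level_tone)

# The flattened tops of the waveform add harmonics that were never in the
# original tone -- that is the harsh distortion we hear.
print("input peak:", line_level_tone.max(), "output peak:", clipped.max())
```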
As we will see in the next segment, sometimes level distortion is used creatively.
If a power amplifier drives a speaker beyond its limits, that will cause distortion and often damage to the cone, meaning that, after the damage, the speaker will continue to exhibit much distortion even at lower signal levels. One should watch levels carefully if feeding a power amp directly into speakers. But today’s “active” speakers, where the power amp is part of the speaker, are usually designed so that the integrated power amp cannot overdrive the driver it is built around. At a college studio where I taught we finally went with active (integrated) speakers after students tended to damage cones, which were replaced several times, in passive speakers driven by a high-powered amp. (“Hey man, turn up the volume!”)
Impedance Mismatch
The term “impedance” refers to the opposition an electrical circuit presents to alternating current (a generalization of resistance), measured in ohms. In the early days of hi-fi it was important to “match” the impedance of an output device with that of an input device. Thus, if one was driving a speaker that had an 8 ohm impedance, one had to use an amplifier with an 8 ohm output. Today’s solid-state devices have pretty much solved the problem. See https://hyperphysics.phy-astr.gsu.edu/hbase/Audio/imped.html.
I only mention impedance because most consumer gear is designed for low-impedance headphones. Many professional studio headphones have a 600 ohm impedance and, if plugged into some computer products, may produce a low-level sound output. If the consumer device has enough power then one simply has to turn up the gain. But higher-impedance headphones are often not satisfactory if attached to, say, one’s cell phone. Just a heads up.
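Here is a rough back-of-the-envelope Python sketch of why this happens. The 0.5 volt output and the 32 ohm “consumer” figure are assumed example numbers (and real loudness also depends on the headphones’ sensitivity and the source’s output impedance), but the basic arithmetic shows how much less power the same output voltage delivers into a 600 ohm load.

```python
import math

def power_mw(v_rms, impedance_ohms):
    """Power delivered into a headphone load at a given voltage: P = V^2 / R."""
    return (v_rms ** 2) / impedance_ohms * 1000.0

v_out = 0.5  # assumed maximum output of a small headphone jack, in volts RMS

consumer = power_mw(v_out, 32)   # a typical low-impedance consumer earbud
studio = power_mw(v_out, 600)    # a high-impedance studio headphone

print(f"32 ohm load:  {consumer:.1f} mW")
print(f"600 ohm load: {studio:.2f} mW")
print(f"difference:   {10 * math.log10(consumer / studio):.1f} dB less power")
```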
Acoustic Reflections
The acoustic environment of a space can also modify, or “distort,” a live sound. Sometimes this can be pleasant, as when a choir performs in a space with lots of reverberation. But speech in such an environment can sometimes be hard to understand. I once attended a concert and small church service in Hallgrímskirkja, a large Lutheran church located in Reykjavik, Iceland. The reverberation time in the church is nearly 9 seconds. The a cappella choir sounded great, but when the preacher stood up to talk, he had to talk in a series of short phrases with lots of pauses to prevent his words from becoming jumbled in all of the reverberation.
But a more serious problem, in my opinion, occurs in small rooms with hard surfaces, such as many classrooms. There, a sound may originate with someone speaking in the room and the hard walls immediately bounce the sound back into it. The delay is so short that it does not sound like reverberation, but when the direct sound (from the person talking) combines with a fairly loud reflected sound, certain frequencies are cancelled out and others exaggerated. This almost never improves the sound. It is acoustic distortion.
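For the technically curious, the effect of a single strong reflection can be sketched in a few lines of Python. The 5 millisecond delay and 0.8 reflection strength below are just assumed numbers for illustration; what matters is that the combination of direct and reflected sound cuts a regular series of frequencies while boosting others, which is why engineers call it comb filtering.

```python
import numpy as np

# One strong reflection arriving shortly after the direct sound behaves like a
# "comb filter": output[n] = direct[n] + gain * direct[n - delay].
delay_ms = 5.0          # assumed extra path length of the wall bounce (~1.7 m)
reflection_gain = 0.8   # assumed strength of the reflection

# Frequency response of the direct sound plus one delayed copy:
# H(f) = 1 + gain * exp(-j * 2*pi * f * delay)
freqs = np.linspace(20, 2000, 1000)
response = 1 + reflection_gain * np.exp(-2j * np.pi * freqs * (delay_ms / 1000))
gain_db = 20 * np.log10(np.abs(response))

# Cancellations fall at odd multiples of 1 / (2 * delay): 100 Hz, 300 Hz, ...
nulls_hz = [(2 * k + 1) / (2 * (delay_ms / 1000)) for k in range(5)]
print("deepest cuts near (Hz):", nulls_hz)
print("worst dip is roughly", round(float(gain_db.min()), 1), "dB")
```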
There are ways to minimize the problem, but most classrooms are built on the cheap, and architects seem much more concerned with the visual aspects of the room than with the sound. Alas.
THE “WANTED” (SOMETIMES)
Level Mismatch
I find it very annoying when someone making a speech talks into a mic that is amplified but the sound that comes out is distorted and hard to listen to. This could be caused by a microphone preamp feeding too hot a signal into a power amp, a power amp overdriving its loudspeakers, or maybe even just a defective mic, but certainly it is unwanted and undesirable.
But now let me talk about the evolution of modern electric guitars. (This is an extremely short, incomplete history.) Guitars and similar stringed instruments (the lute, mandolin, ukulele, etc.) have been around for a long time. Historically they were completely acoustic, but if someone needed to produce a louder sound they played the instrument close to a microphone. When I was growing up in the 1940s and 1950s some guitar players were attaching a contact mic to their acoustic instrument to pick up the sound directly from the sound board. Some players and instrument builders – a key one being Les Paul – developed electro-magnetic pickups that detected the movement of the guitar strings before the sound was radiated by a sound board. Since the resonance of the sound board was no longer so important, the solid-body electric guitar was developed. The original purpose was just to make the sound louder, and often a bit warmer (with less metallic “twang”). I knew Les Paul and heard him play jazz on his solid-body guitar many times. The sound was very smooth but loud enough that it could be the dominant instrument in his jazz ensemble, and it was “distorted” only to the extent that an electro-magnetic pickup does not produce exactly the same sound as a fully acoustic guitar.
However, other popular-music guitarists, especially those playing rock, started taking the output of the electric guitar and using it to overdrive a small amplifier into distortion, and the “fuzz box” was created. The early players overdrove tube preamplifiers to get the “fuzz” sound, but as solid-state gear developed, specialized “pedals” were created to produce it, which in the last analysis is mostly some form of overdriving an audio circuit.
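As a rough illustration of where the “fuzz” comes from, here is a minimal Python sketch of a waveshaper. It is not how any particular pedal is built (the tanh curve and the drive amount are just convenient assumptions), but it shows the essential move: push the signal hard into a stage that flattens the peaks, and new harmonics appear that were not in the clean tone.

```python
import numpy as np

def fuzz(signal, drive=8.0):
    """Generic overdrive-style waveshaper: boost the signal, then squash it.

    The higher the drive, the more of the waveform is pushed into the
    flattened region of tanh, and the more harmonics are added.
    """
    return np.tanh(drive * signal)

sample_rate = 48000
t = np.arange(sample_rate) / sample_rate
clean = 0.5 * np.sin(2 * np.pi * 196 * t)   # roughly a guitar's open G string

dirty = fuzz(clean)

# The clean tone has a single spectral peak at 196 Hz; the fuzzed tone
# sprouts a series of odd harmonics (588 Hz, 980 Hz, ...).
spectrum = np.abs(np.fft.rfft(dirty))
strongest_bins = np.argsort(spectrum)[-4:]
print("strongest components (Hz):", sorted(strongest_bins * sample_rate / len(dirty)))
```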
I personally am not fond of distorted electric guitar, as I always hear it as “something gone wrong” in an audio circuit. But guitarists who use it point out that the box increases the number of harmonics, giving the guitar a broader sound. There is also a way, I think, in which the distorted sound creates a “bad ass” creative feeling for those into that style of music. Counter-culture rock musicians I knew as a college student seemed glad that their loud and distorted music annoyed their parents. However, I know avant-garde “classical” composers in New York City who use that instrument in their non-pop music. So it all seems to be a matter of personal taste and aesthetic goals.
Reverberation
Electronic reverberation is designed to make something sound like it is recorded in a large reverberant space like a church or large concert hall. A number of methods have been used (involving springs, large metal plates, mics and speakers placed in a large stairwell, etc.) but modern digital reverbs can be quite realistic, or not, as desired.
One advantage of electronic reverb is that it can be applied to only certain input channels on a mixing console, so not all instruments are affected equally the way they would be if everyone were playing in the same large reverberant space.
Like other tools it has come to be used in creative, and sometimes exaggerated, ways to create a special sound, especially in pop music. There are even guitar “pedals” that add reverb to the output of an electric guitar, though some guitar amps also have that feature.
True reverberation is not a simple, uniform decay. As the direct sound (from the stage or wherever) hits the surfaces in the room, some frequencies may be absorbed more than others before being reflected back into the room. In addition, weaker frequencies, mostly high frequencies, die out before the lower and mid-range frequencies. Also, the overall sound pressure decreases as the air itself absorbs energy from the wave. Acoustic engineers measure this overall decrease: “reverberation time” usually refers to how long it takes the sound to decay by 60 dB and is thus called RT60. The complexity of true reverberation made it hard to simulate in a highly realistic fashion until the advent of digital units, though the sound of some analog artificial reverbs, especially “plate” reverbs, was valued in pop music for its own character.
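For those who want a number to play with, here is a small Python sketch of the two standard ways RT60 is talked about: directly as the time for a 60 dB decay, and estimated from room size and absorption using Sabine’s formula. The room volumes and absorption figures below are assumed examples, not measurements of any particular space.

```python
# RT60 for an idealized exponential decay: if the level falls a fixed number of
# decibels every second, the time to fall 60 dB is just 60 / (that rate).
def rt60_from_decay_rate(decay_db_per_second):
    return 60.0 / decay_db_per_second

# Sabine's classic estimate from room geometry:
#   RT60 ~= 0.161 * V / A   (V in cubic metres, A = total absorption in sabins,
#   i.e. each surface's area times its absorption coefficient, summed).
def rt60_sabine(volume_m3, total_absorption_sabins):
    return 0.161 * volume_m3 / total_absorption_sabins

# Hypothetical example rooms (the numbers are assumptions, not measurements):
print(f"hard-surfaced classroom: RT60 ~ {rt60_sabine(200, 30):.1f} s")
print(f"large stone church:      RT60 ~ {rt60_sabine(40000, 750):.1f} s")
print(f"hall decaying 20 dB/s:   RT60 = {rt60_from_decay_rate(20):.1f} s")
```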
Delay
Sometimes one wants sound delayed with all frequencies staying the same and the delay time highly controllable. This is called “delay” and is sometimes used creatively in pop and electro-acoustic (electronic) music. As a young student I did not have access to good artificial reverberation and made a somewhat crude delay setup using two tape recorders. The main sound would be recorded on machine number 1, and the tape then travelled some distance to a second machine used for playback; the time the tape took to travel between the two machines determined the delay.
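Here is a rough Python sketch of the idea. The tape speed and the distance between the machines are just illustrative numbers (not a record of my actual setup), and the second function is the modern digital equivalent of that trick – a simple circular-buffer delay line – rather than anything I built back then.

```python
import numpy as np

# The tape version: the delay time is simply the distance the tape travels
# between the record head and the playback head, divided by the tape speed.
tape_speed_ips = 15.0     # inches per second, a common studio speed (assumed)
head_gap_inches = 30.0    # assumed spacing between the two machines
print(f"tape delay ~ {head_gap_inches / tape_speed_ips:.1f} seconds")

# The modern digital equivalent is a circular buffer: read what was written
# delay_samples ago, and optionally feed some of it back for repeating echoes.
def simple_delay(x, sample_rate, delay_seconds=0.5, feedback=0.4, mix=0.5):
    delay_samples = int(delay_seconds * sample_rate)
    buffer = np.zeros(delay_samples)
    out = np.zeros(len(x))
    for n, sample in enumerate(x):
        delayed = buffer[n % delay_samples]          # written delay_samples ago
        out[n] = (1 - mix) * sample + mix * delayed  # blend dry and delayed
        buffer[n % delay_samples] = sample + feedback * delayed
    return out
```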
A composer who used delay for creative advantage was Steve Reich in his piece It’s Gonna Rain. He recorded a street preacher who shouted “It’s Gonna Rain” while preaching about the end of the world. He made a tape and duplicated it.
He then started to play the two complete copies of the recording on two separate tape recorders. But analog tape is not consistent in terms of time, due to variations in capstan speed, stretching of the tape, etc. So although the two recorders started playing the recording “in sync,” gradually a time difference developed between the two tapes. Reich could enhance that by briefly touching the feed reel of one of the tape recorders. The piece has two sections, and a longer delay is used in the second. I am oversimplifying for brevity, but one can learn more about the piece by searching for “Reich: It’s Gonna Rain” on Google. It is also considered an early example of minimalism.
Other experimental composers have used delay in various other ways. (Maybe I will do a post just about delay music in the future.)
Vocoder and other voice processors
Early on in the experimental electronic music world, some composers used a vocoder to create a distorted vocal sound that was more like electronic music than a traditional voice. The vocoder was originally designed in the 1930s at Bell Labs as a way to process voice for certain kinds of telephony. But composers found its artificial sound an interesting addition to electronic music. In the modern world of telephony I don’t think one will find vocoders in use, but they are a popular effect for some electronic music and can even be found as plug-ins for digital audio workstation software.
Other electronic vocal distortions have also been produced in addition to the vocoder, usually aimed at creating the sound of science-fiction movie robot speech.
Auto-Tune
The voice processor that has taken over the recording industry big time is Auto-Tune. It was invented by a software engineer who worked in another industry but also had an interest in music. At a convention where computer software for music was being discussed, the wife of a salesperson said something like, “I sure wish something could be invented to make me sing in tune.” The programmer, Andy Hildebrand, whose “day” job was in the oil industry, later realized that he could apply some of the techniques he was using there to create software that could correct the pitch of a singer who occasionally sang notes out of tune. He produced a version, which he called “Auto-Tune,” and took it to a NAMM (National Association of Music Merchants) show, not knowing whether it would even be something the music industry would want. The product became a major hit at the show and engineers lined up to order copies.
It was originally designed to be subtle – just a slight adjustment so a singer or instrumentalist would sound like they “nailed” a specific note when in fact they hit it flat or sharp. But then a number of pop artists, such as Cher, started using it with exaggerated amounts of “correction” to produce a kind of distorted voice that intrigued listeners. Now almost every recording studio has that software as part of its repertoire of digital audio tools. It can be applied after a track has been recorded, which has saved countless retakes over small note errors, and it is also used to create a purposefully distorted sound, especially for singers.
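To give a flavor of the underlying idea, here is a tiny Python sketch of the “snap to the nearest note” arithmetic. It is emphatically not Auto-Tune’s actual algorithm, which also has to detect the singer’s pitch and resynthesize the audio smoothly; the 440 Hz reference and the example sung note are just assumptions for illustration.

```python
import math

A4 = 440.0  # assumed reference tuning

def nearest_note_hz(freq_hz):
    """Snap a detected pitch to the nearest equal-tempered semitone."""
    semitones_from_a4 = 12 * math.log2(freq_hz / A4)
    return A4 * 2 ** (round(semitones_from_a4) / 12)

def cents_off(freq_hz):
    """How far, in cents (1/100 of a semitone), the sung pitch misses the target."""
    return 1200 * math.log2(freq_hz / nearest_note_hz(freq_hz))

# A singer aiming for A (440 Hz) but landing a little flat at 430 Hz:
sung = 430.0
print(f"sung {sung} Hz: {cents_off(sung):+.1f} cents off, "
      f"target {nearest_note_hz(sung):.1f} Hz")
```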
Other Concept Art Using Forms of Distortion
Any number of experimental artists have used forms of sound distortion to create works of art. One of the most intriguing to me is Katie Paterson, who used a novel form of audio distortion in creating her piece Earth-Moon-Earth. She took a recording of Beethoven’s Moonlight Sonata and converted it to Morse code. She then bounced that signal off the moon, captured the code as it came back, and re-converted it to music. The moon’s surface distorted the music in certain ways (creating gaps and other errors), and the resulting recording, which she refers to as having been “filtered by the moon,” became her finished art piece.
On that unusual note I will leave the discussion of distortion – the good and the bad – there. Even sound and music professionals can often disagree about whether a certain form of distortion (change) is good or bad.
This post cannot discuss every form of distortion used for effects and some readers probably have their own techniques. It is always worth experimenting!