Reaping in the Dark – Part 2.1: Do You Hear What I Hear?

Welcome back to Reaping in the Dark. Today, we will be digging into the first article in Part 2, which covers some of the basic concepts you will encounter while working with Reaper, concepts that will be the building blocks for your work once we start getting hands-on. In this section specifically, we will begin by looking at what you will be capturing and manipulating no matter how you use Reaper: sound itself.

 

Sound: What Is It?

 

In its most basic sense, what we perceive as sound is produced by sound waves, which consist of fluctuations in pressure in the medium the wave is traveling through. Make sense? If yes, great! If not, no worries. I, after all, failed high school chemistry my first go around and didn’t have a hope of making it to physics, so I’ll break this down into terms even I can understand.

 

Note: The following video has no sound.

 

Consider a guitar string, as shown in the video above. When plucked, it produces vibrations which travel through the air around it, creating the sound waves we then perceive as the sound of the note that has been played. What happens when the string is plucked gives us a glimpse into how all sound around us comes to be. When a note is played, the string oscillates; to oscillate simply means to move back and forth at a regular speed. These oscillations create predictable changes in pressure in the air molecules around the string. The vibrations are so small and happen so rapidly that they typically cannot be seen, though we can certainly hear them. Each cycle of this back-and-forth movement creates what are called compressions and rarefactions.

 

A conga drum, which is a tall, barrel-shaped drum with a single head.

 

To demonstrate this, let’s imagine a drum. A drum consists of a shell (the “body” of the drum) and one or two heads (membranes, made from any variety of synthetic or organic materials, stretched over the openings of the shell). For simplicity’s sake, let’s imagine our drum consists of a shell and only one head. The head is the part of the drum that is meant to be struck. When the head is struck, it vibrates, becoming our sound source. When first struck, the head pushes into the air inside the shell of the drum, compressing it and creating the compression for the first half of our cycle. After this, the head changes direction and moves the opposite way, allowing the compressed air not only to return to its previous state, but to push back past it in the opposite direction, creating the rarefaction that makes up the second half of our cycle.

 

This concept of an oscillator is something you will see very often throughout these tutorials, since it is the basis for so much of what we work with as musicians and producers. There are countless examples similar to the guitar string above. Every plucked or strummed string instrument operates on this concept, and even bowed instruments work in a similar fashion, differing only in how the vibrations of the string are set into motion. Wind instruments, on the other hand, operate on vibrations of an air column inside the instrument. The speakers we use to listen back to what we’ve recorded work on this principle as well, using a vibrating membrane to recreate the sounds we have captured. Even the human voice can be viewed from this perspective, since the sounds we produce with our voice are created by the vibrations of our vocal cords.

 

What’s in a Sound?

 

A tuning fork.

 

Now that we know exactly what sound is, it’s time to take a quick look at some of the ways it is examined from the perspective of audio production. We will cover two common approaches, which will provide the basis for much of what we discuss in later sections.

 

First, it’s important to understand how sound is perceived. Perception can be broken down into two elements: pitch and volume. Building on what was covered in the previous section regarding how sound is produced, these two elements correspond to two measurable properties: frequency and amplitude.

 

Let’s revisit our example of the plucked guitar string. The pitch of the note produced by this string, as we perceive it, is determined by the frequency of its vibrations: the faster it vibrates, the higher the pitch. This is why tightening the string by tuning it higher produces a higher note, as a tighter string with more tension vibrates much faster. These vibrations are measured in hertz (Hz), one hertz being one complete back-and-forth cycle per second. For example, the standard of tuning to A440 uses the note produced by a frequency of 440 cycles per second as a reference, which is the pitch of the note A above middle C on a piano. The range of human hearing spans from 20 Hz to 20,000 Hz (or 20 kHz).
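As an aside for the programmers among us, the relationship between notes and frequencies is easy to play with in code. Here is a minimal Python sketch (purely illustrative, nothing Reaper requires) that computes note frequencies from the A440 reference, using the equal-tempered rule that each semitone multiplies the frequency by the twelfth root of two:

# Frequency of a note a given number of semitones away from A440.
# In twelve-tone equal temperament, each semitone multiplies the
# frequency by 2 ** (1 / 12).
def note_frequency(semitones_from_a4: int) -> float:
    return 440.0 * 2 ** (semitones_from_a4 / 12)

print(note_frequency(0))    # A4: 440.0 Hz
print(note_frequency(12))   # A5, one octave up: 880.0 Hz
print(note_frequency(-12))  # A3, one octave down: 220.0 Hz

Note how going up an octave simply doubles the frequency, and going down an octave halves it.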

 

Amplitude, on the other hand, is measured by the size of the cycles: how far the vibrating object moves away from its center. We perceive this as volume. Let’s look at our string again. A guitarist produces a louder sound by picking or plucking the instrument’s strings harder. This causes them to vibrate with more energy, which we then perceive as a louder sound.

 

As important as these two measurements of a sound are, it would be a very boring world if this were all a musical sound was made of. That’s where envelopes come in. Though most relevant in electronic music, the idea of envelopes can be very easily understood by looking at the sounds of instruments we’re all familiar with.

 

The most common envelope stages you will encounter are those referred to as attack, decay, sustain, and release (collectively referred to as ADSR). Attack refers to the speed at which a sound rises to its full volume. Decay refers to how quickly a sound decreases from the peak it reached during the attack. Sustain refers to the level a sound holds steady at after its decay, for as long as the note is held. And finally, release refers to how quickly a sound dies out completely once the note ends.
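If it helps to see these four stages as actual numbers, here is a minimal Python sketch (using NumPy, with all the parameter values being arbitrary choices of mine) that builds an ADSR envelope as an array of gain values you could multiply a sound by:

import numpy as np

def adsr(attack, decay, sustain_level, sustain_time, release, sample_rate=44100):
    # Each stage is a ramp (or a flat hold) of gain values between 0.0 and 1.0.
    # All times are in seconds.
    a = np.linspace(0.0, 1.0, int(attack * sample_rate))             # rise to peak
    d = np.linspace(1.0, sustain_level, int(decay * sample_rate))    # fall to the sustain level
    s = np.full(int(sustain_time * sample_rate), sustain_level)      # hold while the note lasts
    r = np.linspace(sustain_level, 0.0, int(release * sample_rate))  # fade to silence
    return np.concatenate([a, d, s, r])

# A percussive, piano-like shape: near-instant attack, no sustain.
envelope = adsr(attack=0.005, decay=0.4, sustain_level=0.0, sustain_time=0.0, release=0.1)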

 

Let’s look at some sounds you are probably familiar with from the perspective of their shape according to these envelopes. First, let’s listen to this recording of a single note on a piano. This sound has a very strong attack, since it hits its peak as soon as the note is struck. From there, it has a relatively even decay, with virtually no sustain, since a piano has no mechanism by which it can hold a note at a specific volume. The release is as quick as the decay, since the note releases as soon as the key is lifted, cutting off whatever sound is still audible by that point.

 

 

 

The next clip is a short audio example of a sine wave. Don’t worry if you have no idea what this means. Just know that this is one of the most basic sounds a synthesizer can produce. With that in mind, listen to how this also has a strong attack, but unlike the piano example above, has virtually no decay but a long sustain. Also unlike the piano, the release of this sound is equal to the attack, since it stops as quickly from its peak as it took to reach that peak.
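If you’d like to generate a sine wave like this yourself, here is a short Python sketch using only the standard library; the frequency, length, amplitude, and file name are all arbitrary choices on my part:

import math, struct, wave

SAMPLE_RATE = 44100
FREQUENCY = 440.0   # pitch in Hz: the A above middle C
SECONDS = 2.0
AMPLITUDE = 0.5     # half of full scale, leaving some headroom

with wave.open("sine.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    for i in range(int(SAMPLE_RATE * SECONDS)):
        value = AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * i / SAMPLE_RATE)
        f.writeframes(struct.pack("<h", int(value * 32767)))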

 

How Does Sound Measure Up?

 

Measuring sound is not as straightforward as it may seem. After all, a computer does not have human ears to convert sound waves into what our brains perceive as sounds, and our brains have no way to convert the digital data contained in a sound file into what we perceive as sound. Yet both ways of interpreting sound are measured with the same unit, the decibel, which can lead to some confusion. So let’s toss that confusion aside and see how exactly we measure what we’re working with.

 

To first understand how we measure sound, it is important to know that the decibel isn’t a fixed, defined amount; it is a means by which we express the ratio between two measurements. To say a sound’s loudness is 100 decibels doesn’t make sense unless you know what it is being compared to.
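A tiny bit of Python makes this concrete. The formula for comparing two amplitudes in decibels is 20 times the base-10 logarithm of their ratio (this sketch is just an illustration of the math):

import math

def db_change(amplitude, reference):
    # Decibels express a ratio; the number is meaningless without the reference.
    return 20 * math.log10(amplitude / reference)

print(db_change(2.0, 1.0))  # doubling the amplitude is about +6 dB
print(db_change(0.5, 1.0))  # halving it is about -6 dB

With that said, here are two ways sound is measured.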

 

Sound Pressure Levels

 

The most commonly known application for the decibel as a unit of measurement is that of sound pressure levels (written as dB SPL). This measures, as the name suggests, the differences in pressure caused by a sound, as explained above regarding how sound is produced. These measurements are referenced against 0 dB SPL, the quietest sound a normal human ear can detect. For perspective, regular conversation lands around 60 to 70 dB SPL, while shouted conversation goes upward of 90 dB SPL. A rock concert or jet engine can land near 140 dB SPL, at which we reach what is referred to as the threshold of pain: the point at which a sound becomes so loud it physically hurts.
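In physical terms, that 0 dB SPL reference corresponds to a pressure fluctuation of 20 micropascals. Here is a small sketch of the same decibel math applied with that reference (the sample pressures are rough, illustrative figures):

import math

P0 = 20e-6  # reference pressure in pascals; defined as 0 dB SPL

def spl(pressure_pa):
    # Sound pressure level for a pressure measured in pascals.
    return 20 * math.log10(pressure_pa / P0)

print(spl(0.02))   # 60 dB SPL: roughly normal conversation
print(spl(200.0))  # 140 dB SPL: around the threshold of pain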

 

Full Scale

 

In the world of digital audio, our reference becomes what is called full scale (written as dBFS). This can be tricky to understand, given that the reference here is also 0; in this case, however, we count down into negative numbers instead of up. In other words, 0 dBFS is the maximum level a sound can reach before it begins to “clip”, which we perceive as distortion.
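To make that concrete, here is a sketch that measures the peak level of some samples in dBFS, assuming samples are stored as numbers between -1.0 and 1.0 (a common convention, though not the only one):

import math

def peak_dbfs(samples):
    # 0 dBFS is full scale (a peak of 1.0); everything quieter is negative.
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

print(peak_dbfs([0.5, -0.25, 0.1]))  # about -6 dBFS: peak at half of full scale
print(peak_dbfs([1.0, -1.0]))        # 0.0 dBFS: right at the ceiling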

 

More on the Decibel

 

We’ll end with some final important points on the decibel. It is important to understand that the decibel is a logarithmic scale, which makes it a kind of shorthand. The range of volumes the human ear can perceive is so enormous that referring to levels this way is far more manageable than the long, unwieldy numbers we would otherwise have to deal with. To demonstrate this, let’s listen to a couple of audio examples.

 

The decibel itself is already proof of the tremendous power of our ears, as it literally refers to a tenth of a bel, a unit named after scientist and inventor of the telephone Alexander Graham Bell. Our ears can detect changes in level as small as about one decibel, while a change of 3 decibels is clearly noticeable. To get a sound that is perceived as twice as loud, you need an increase of a full 10 decibels. Listen to the following two clips, and take note of how much of a difference you notice. Both consist of repeated bursts of white noise, with the first decreasing in volume by 1 dB every time the sound plays, and the second by 3 dB.
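For the curious, clips like these are easy to generate. Here is a rough Python sketch of the idea, using only the standard library (the burst length, count, and file name are my own arbitrary choices):

import random, struct, wave

SAMPLE_RATE = 44100
BURSTS = 6
STEP_DB = 1.0  # change to 3.0 for the second clip

with wave.open("noise_steps.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    for n in range(BURSTS):
        gain = 10 ** (-n * STEP_DB / 20)  # each burst is STEP_DB quieter than the last
        for _ in range(int(SAMPLE_RATE * 0.5)):  # half a second of white noise
            value = gain * random.uniform(-0.5, 0.5)
            f.writeframes(struct.pack("<h", int(value * 32767)))
        f.writeframes(b"\x00\x00" * int(SAMPLE_RATE * 0.5))  # half a second of silence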

 

 

 

Final Thoughts

 

If you’ve made it this far, great job. This can be some complicated stuff, but it is important to understand this so you can get the most out of future articles. Read through this a couple more times if you don’t understand something, and if you still don’t understand it, drop a comment below or reach out through any of the social media or contact links on the site. This will probably be one of the most complicated articles of the series, so it’ll be a lot smoother sailing from here on out. Until next time, happy reaping!

 

Support the Series

 

This course is provided free of charge, and is supported by contributions from readers. If you have benefited from this course and would like to give back to support future content for the course and others like it, please consider making a contribution in any amount by buying me a coffee through Ko-fi.

 

Course Navigation

 

Previous Chapter

Next Chapter
