– Guest blog by Audio Dev Academy
Sound is such an intuitive concept for human beings that it can actually be hard to explain, even when you stop and think about what sound vibrations really are. As a small thought experiment, when thinking of sound and digital audio, I’d like to start with the ear. More specifically: the eardrum.
You can view the eardrum as a thin surface – like the skin of a snare drum – that can move backwards and forwards in space. When you are in the vicinity of a sound source, the air around you is compressed and expanded by the sound waves it produces, moving your eardrum backwards and forwards. However complex the sound – from a bird singing to a full orchestra – your eardrum can only move backwards and forwards. Yet from this simple movement, your brain constructs your whole listening experience, which is pretty incredible. Although I would be extremely interested to know how the brain does all this, I dare not enter the realm of neuroscience, and will accept this to be magic. For the purpose of reading this little blog (and the rest in the series), I would advise you to do the same, or you are going to have a long, long day ahead of you 🙂
So, sound – no matter how complex – can be represented by a single movement backwards and forwards. This should make sense if you produce music on a computer, as the sound monitors in front of you also consist of one or perhaps several speaker cones that can only move backwards and forwards in space. Yet they are able to reproduce a full mix with all kinds of different instruments playing at the same time. The fact that complex sound can be represented as a single movement over time makes it pretty easy to create a digital representation of it: you just measure the sound pressure in the air at very short intervals in time, and store the measurements as a list of ordinary numbers on your computer.
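To make this concrete, here is a tiny Python sketch (a toy illustration, not how a real sound card works) that "measures" a simulated sound – a pure 440 Hz sine tone – at short, regular intervals. The sample rate and duration are arbitrary choices for the example; the point is that what comes out is just a list of ordinary numbers.

```python
import math

# Toy example: sample a simulated 440 Hz sine tone at 44.1 kHz for 10 ms.
SAMPLE_RATE = 44100   # measurements ("samples") per second
FREQUENCY = 440.0     # pitch of the tone in Hz
DURATION = 0.01       # seconds of sound to capture

samples = [
    math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
    for n in range(int(SAMPLE_RATE * DURATION))
]

# 'samples' is now just a list of ordinary numbers between -1.0 and 1.0,
# each one a snapshot of the (simulated) sound pressure at one instant.
print(len(samples))  # 441 snapshots for 10 ms of sound
```

At 44,100 measurements per second, even this short 10 ms snippet already produces 441 numbers – which gives you a feel for why audio files get large so quickly.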
This is essentially what happens when you plug a microphone into your sound card and press record in your DAW. The microphone continuously measures the sound pressure in the air, your sound card ‘samples’ the amplitude of this continuous signal at short intervals in time, and your computer stores these values as a long list of numbers, usually according to a specified format like .WAV or .AIFF. Your computer can then play back the recorded sound by sending the same list of numbers back to your sound card, where they are turned back into a continuous signal that moves the speaker cones of your sound monitors backwards and forwards again, reproducing the complex sound.
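You can see the "list of numbers in a specified format" idea directly with Python's built-in `wave` module. This sketch (my own example, not tied to any particular DAW or sound card) writes one second of a 440 Hz tone to a .WAV file as 16-bit integers – the common "CD quality" sample format:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100
samples = [
    math.sin(2 * math.pi * 440.0 * n / SAMPLE_RATE)
    for n in range(SAMPLE_RATE)  # one second of a 440 Hz tone
]

# Convert each number in [-1.0, 1.0] to a signed 16-bit integer,
# packed as little-endian bytes, which is what a .WAV file expects.
frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)            # mono: a single list of numbers
    f.setsampwidth(2)            # 2 bytes = 16 bits per sample
    f.setframerate(SAMPLE_RATE)  # how many samples represent one second
    f.writeframes(frames)
```

If you open `tone.wav` in your DAW or media player, you'll hear a pure A above middle C – and under the hood, it really is nothing more than the list of numbers we just wrote out.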
The cool thing is that once sound is stored on a computer as lists of numbers, you can manipulate these numbers with DSP algorithms – the good stuff! – to create all kinds of sound effects like EQ, reverb, compression, flanging and whatnot. More about this soon!
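As a tiny taste of that (a deliberately simplified sketch – real effects are more involved), here are two "effects" that operate purely on a list of numbers: gain, which scales every sample, and a crude echo, which mixes a delayed, quieter copy of the signal back into itself:

```python
# Toy DSP: both "effects" are just arithmetic on a list of numbers.

def apply_gain(samples, gain):
    # Louder or quieter: multiply every sample by a constant.
    return [s * gain for s in samples]

def add_echo(samples, delay_samples, decay):
    # Mix a delayed, attenuated copy of the signal into the original.
    out = list(samples) + [0.0] * delay_samples
    for i, s in enumerate(samples):
        out[i + delay_samples] += s * decay
    return out

clap = [1.0, 0.0, 0.0, 0.0]      # a toy "impulse" signal
quiet = apply_gain(clap, 0.5)    # [0.5, 0.0, 0.0, 0.0]
echoed = add_echo(clap, 2, 0.5)  # [1.0, 0.0, 0.5, 0.0, 0.0, 0.0]
```

Even sophisticated plug-in effects ultimately boil down to this kind of arithmetic on sample lists – just with cleverer math and a lot more of it.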
Audio Dev Academy is an exciting online environment for like-minded programmers, musicians and audio engineers who want to learn how to code audio plug-ins and virtual instruments – to be launched in 2019. In preparation for the launch, Audio Dev Academy will publish a series of blogs and ebooks about programming and the inner workings of audio plug-ins and virtual instruments. If you want to know more, find Audio Dev Academy on Facebook.