– Guest blog by Audio Dev Academy
Audio processing vs synthesis
As a plugin programmer, you manipulate digital audio signals in code. In my last two mini blogs, about digital audio waveforms and creating a volume control, I explained how you can change the volume of a digital audio waveform by multiplying its sample values by a scaling factor. Applying an operation like this to an existing audio signal is referred to as ‘processing’, which is what most audio effect plugins do. However, it is also possible to create your own sample values in code and generate artificial audio waveforms that you can play back from within a plugin. This is called ‘synthesis’, which is what most synthesisers do.
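To make the distinction concrete, here is a minimal Python sketch of the ‘processing’ case from the volume control blog. The function name and the buffer contents are just illustrative; a real plugin would typically do this in C++ inside an audio callback.

```python
def apply_gain(samples, gain):
    """Return a new buffer with every sample value scaled by `gain`."""
    return [s * gain for s in samples]

# A tiny hypothetical buffer of sample values:
buffer = [0.0, 0.5, 1.0, -0.5, -1.0]

# Halving the volume means multiplying every sample by 0.5:
quieter = apply_gain(buffer, 0.5)  # [0.0, 0.25, 0.5, -0.25, -0.5]
```

Processing starts from sample values it receives; synthesis, as we will see below, starts from nothing and writes its own.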
To beep or not to beep
When you use an effect plugin on an audio track, your DAW passes the sample values that make up the audio track to the plugin, and retrieves them again after the plugin is done processing them. When you use a synthesiser or virtual instrument instead of an audio effect, there is a subtle difference. Because a synthesiser generates sound on its own, it is passed all-zero sample values and is expected to fill them with meaningful audio waveforms. A very basic example of this would be to fill the sample values with random numbers between -1.0 and 1.0, which would result in a white noise audio signal. We are going to attempt something slightly more complex. The most basic musical sound signal is a ‘sine’ wave: a smooth up-and-down oscillating motion over time. It sounds like a (somewhat uninspiring) continuous beep, but it is great for the purpose of learning synthesis and testing processing algorithms. We will do a little thought experiment to generate a sine wave of 1000Hz. So, no math for now, just conceptual thinking.
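The white noise idea is simple enough to sketch right away. This is a hedged illustration in Python rather than real plugin code: it just shows what ‘filling the sample values with random numbers’ means.

```python
import random

def white_noise(num_samples):
    """Fill a buffer with random sample values between -1.0 and 1.0,
    producing a white noise signal when played back."""
    return [random.uniform(-1.0, 1.0) for _ in range(num_samples)]

# One second of noise at a samplerate of 44100:
noise = white_noise(44100)
```

Every sample is independent of the previous one, which is exactly why it sounds like hiss rather than a pitched tone.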
Stretch and squeeze
Let’s say you have to create a single sine waveshape that has a duration of precisely one second. In my blog about samplerate, I explained that the samplerate is the number of samples used to represent one second of digital audio. For this example, let’s use a samplerate of 44100, which means we have to create a single sine waveshape that spans 44100 samples. Instead of using math, just imagine that the list of sample values starts at 0.0, smoothly moves up to 1.0, then down to -1.0 and finally back up to 0.0. Now, because ‘frequency’ is defined as the number of waveshape cycles per second, playing back and looping this waveform would create a sine wave with a frequency of exactly 1Hz: a very low note, below the frequency threshold of human hearing. This also means that you can create a sine wave of 1000Hz by stuffing 1000 of these waveshape cycles into those 44100 values. So, the number of waveshape cycles you stuff into one second of audio determines its frequency. Alternatively, try to visualise that decreasing the frequency of a sine wave stretches the waveshapes out over time, and increasing the frequency squeezes them together. Get it? I’ll show you the math soon!
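If you can’t wait for the math, here is a small Python sketch of the thought experiment above. It is only an illustration (the names are made up, and a real synthesiser would compute samples incrementally in its audio callback), but it shows the ‘cycles per second’ idea directly: each sample advances the phase by frequency divided by samplerate.

```python
import math

SAMPLE_RATE = 44100  # samples per second, as in the example

def sine_wave(frequency, num_samples):
    """Generate `num_samples` sample values of a sine wave.
    Sample n sits at time n / SAMPLE_RATE seconds, and the wave
    completes `frequency` full cycles every second."""
    return [math.sin(2.0 * math.pi * frequency * n / SAMPLE_RATE)
            for n in range(num_samples)]

one_hz = sine_wave(1.0, SAMPLE_RATE)      # one cycle spread over 44100 samples
one_khz = sine_wave(1000.0, SAMPLE_RATE)  # 1000 cycles squeezed into the same second
```

Lowering `frequency` stretches each cycle over more samples; raising it squeezes more cycles into the same 44100 values, exactly as in the stretch-and-squeeze picture.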
Audio Dev Academy is an exciting online environment for like-minded programmers, musicians and audio engineers who want to learn how to code audio plug-ins and virtual instruments – to be launched in 2019. In preparation for the launch, Audio Dev Academy will publish a series of blogs and eBooks about programming and the inner workings of audio plug-ins and virtual instruments. If you want to know more, find Audio Dev Academy on Facebook.