Wednesday 22 January 2014

Creative Convolution Part 1 - Resonate



Convolution reverb is a great thing. Its usual use, capturing and re-creating spatial acoustics, is useful for a range of tasks such as fitting dialogue into a scene, or just creating a sense of space. But it's worth taking a moment to think about what is happening during this process. One sound file is effectively being filtered through another: the convolution reverb filters the frequency content of each sample through all the samples present in the IR (a simple explanation, but adequate here). The impulse response files themselves effectively hold a snapshot of the acoustic data of a space, or at least from a point in that space. This data describes how the room responds acoustically to an impulse, or short burst of broadband noise. So you could think of this in a different way: sound files as containers of information, acoustic information. But this data is made up (like any sound file) of frequency content over time, so the convolution process is potentially useful for much more than just recreating the acoustic space of a room. It is effectively a kind of filter, and that is the main subject here: an exploration of that idea.
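To make that concrete, here is a minimal sketch of the operation in Python using numpy and scipy. The file names are placeholders, and both files are assumed to be mono at the same sample rate:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical file names; both assumed mono, same sample rate.
sr, dry = wavfile.read("dry_vocal.wav")
_, ir = wavfile.read("impulse_response.wav")
dry = dry.astype(np.float64)
ir = ir.astype(np.float64)

# The convolution itself: every sample of the dry sound is
# filtered through every sample of the IR.
wet = fftconvolve(dry, ir)

# Normalise to avoid clipping, then write the result.
wet /= np.max(np.abs(wet))
wavfile.write("wet.wav", sr, (wet * 32767).astype(np.int16))
```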

I once heard a really useful definition of timbre: “spectral footprint over time”.
From a sound design perspective, getting the correct timbre for a sound is important, as it connects the sound to its source. Interestingly, we naturally identify sounds by their source (the object which created them) rather than by their acoustic properties. This explains why timbre is so important: it immediately describes the source, e.g. hollow or metallic, and gives the listener an impression of what created the sound.
 

If you found your way here, I expect you might be familiar with this SoundWorks video on the making of Inception. I really love the bit at 3:20 where Richard King describes the subwoofer recordings they made in the warehouse; there is something fascinating about how powerful and complex the natural resonances of the space are.

 





I've long had an interest in physical modelling as a technique for designing sounds, so when I saw that video I started wondering what it would take to achieve the same results artificially. Would it be possible to model the space and materials of a location like that and produce some useful sounds? 
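As a crude illustration of what that kind of modelling involves, here is a minimal modal-synthesis sketch in Python rather than Max: a struck object approximated as a handful of exponentially decaying sine partials. The mode frequencies, amplitudes and decay times here are invented for illustration; a real model would derive them from the materials and geometry of the object or space:

```python
import numpy as np
from scipy.io import wavfile

sr = 44100
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)

# Invented modes: (frequency in Hz, amplitude, decay time constant in s).
modes = [(220, 1.0, 0.9), (563, 0.6, 0.6), (1187, 0.4, 0.35), (2210, 0.25, 0.2)]

# Each mode is an exponentially decaying sinusoid; their sum gives a
# crude "struck metal object" resonance.
hit = sum(a * np.exp(-t / d) * np.sin(2 * np.pi * f * t) for f, a, d in modes)

hit /= np.max(np.abs(hit))
wavfile.write("modal_hit.wav", sr, (hit * 32767).astype(np.int16))
```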

I think that understanding some of the principles of physical modelling is useful not just for creating sounds, but also for explaining how and why sound works the way it does. I've done a few experiments in Max using filterbanks and basic waveguides to try and simulate real-world physical resonances, with varying degrees of success, but while experimenting with convolution about a year ago I discovered some interesting techniques which are related. It started with a set of recordings like these:

 

These are impacts on various metal objects, recorded with a contact transducer. If you've ever used one of these, you will know that one of the great things about recordings made this way is that they are completely dry, because the transducer only picks up the vibrations travelling through the object itself. It means that you can go anywhere to record these sounds, even next to busy roads where a conventional microphone recording would be useless because of noise from the road. So I recorded a whole library of these, experimenting with different objects and different methods of striking them.

It was while doing this that I had the realisation: this process is exactly the same as taking impulse responses of rooms; I was just collecting acoustic data. When taking an IR of a room, you are collecting data about how the room behaves acoustically; these recordings instead contain data about the resonant properties of the material. The broadband noise used when recording an IR in a room, such as a balloon burst, is comparable to the impulse created when striking an object. Both are a burst of broadband noise followed by its effect on something: in the case of the room, the reverberant characteristics; in the case of a material, its resonant properties.

Here's an example of this sort of sound used as an IR in a convolution reverb. You will hear a dry vocal sample, then the sound of the impact recorded with the contact transducer, then the vocal sample played through the convolution reverb with the impact loaded as an IR. You can hear how the original sound is filtered through the resonant properties of the material:



It's interesting to note how sounds with a long decay still produce a result with a reverb-like quality, while sounds with a shorter decay create more of a filtering effect.

Around this time two other things caught my attention. Firstly, Alex Harker and Pierre Alexandre Tremblay, both from the University of Huddersfield, released the HISS Tools, a collection of Max objects designed for pretty much any convolution operation you can think of (and a few more besides!). If you've used the Max for Live convolution reverb before, they are at the heart of it. For sound designers these are an amazing addition to Max, as they allow anyone to integrate convolution reverb into any patch quickly and easily. Huge thanks to them for making these objects publicly available.

With this in mind I started sketching out some ideas for a Max patch that would take advantage of these. It began as a kind of configurable resonant space, partly inspired by techniques used for mixing sounds together in game audio, and partly by the experiments I've been describing.

The second thing that caught my attention at that time was this post at Designing Sound by Douglas Murray, focusing on his use of convolution reverb to create infinite airfill using white noise as a sound source. Up until then I had been using a range of sounds to 'excite' the IRs, but this was another direction, and it made so much sense for creating ambient, atmospheric sounds. It's a great technique, and as an extension you can increase the resonance by stacking up multiple instances of convolution reverb loaded with the same sound as an IR. Here's an example similar to before, but with one, two, three and four instances of the same reverb running in series.

 
When using white noise as a source, with one instance of reverb you will always hear the noise coming through to some degree, but with two or more instances the sound becomes progressively more filtered and the dominant harmonics of the IR become accentuated.

Be careful if you want to try this inside a regular DAW – there will need to be some heavy gain reduction somewhere in your signal chain, otherwise extreme clipping will result!
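For anyone who wants to experiment with this outside a DAW, here is a rough sketch of the stacking idea in Python: white noise pushed through the same IR several times in series, with normalisation after each pass standing in for that gain reduction. The IR file name is a placeholder:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr, ir = wavfile.read("impact_ir.wav")        # placeholder IR file, mono
ir = ir.astype(np.float64)
ir /= np.max(np.abs(ir))

noise = np.random.uniform(-1.0, 1.0, sr * 4)  # four seconds of white noise

# Each pass through the same IR accentuates its dominant harmonics;
# normalising after every pass stands in for the heavy gain reduction
# you would otherwise need in the signal chain.
signal = noise
for _ in range(4):
    signal = fftconvolve(signal, ir)
    signal /= np.max(np.abs(signal))

wavfile.write("stacked_resonance.wav", sr, (signal * 32767).astype(np.int16))
```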

Sounds of this kind are really useful for creating evocative ambiences or adding some extra resonance to metal hits and scrapes. I like the idea of having a palette of sounds like these, based just on texture, and using them in layers the same way a painter uses oils on a canvas.

So how about expanding this idea? There is certainly scope to. I spent a bit of time designing sounds specifically for this purpose, and that is definitely worth pursuing, but I'm not going to go too far into it here. Instead, I want to talk about another technique for generating interesting sounds for use as IRs. While designing specific sounds, I started experimenting with using music tracks as a source for IRs. I'd snip out small sections of music with interesting harmonics, then load them as IRs into multiple reverbs and play noise or filtered noise through them. The resulting sound is like a constant smear of all the frequencies present in the music, similar to the results you can achieve with granular synthesis but with a richer sound.

Bored of slicing music up manually, I made a small utility patch to automate the process. It takes a sound file and chops it into smaller slices. The patch is fairly crude, but it works fine for the purposes of this article; to work flawlessly it really needs to be a phasor-synchronised system (feel free to improve it if you like, but send me a better version if you do!). It's reasonably straightforward to use, just follow the instructions.
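If you would rather script the slicing than patch it, the same job can be sketched in a few lines of Python. This is only an analogue of the patch, not its internals: the slice length and file names are arbitrary choices, short fades avoid clicks at the boundaries, and a mono input file is assumed:

```python
import numpy as np
from scipy.io import wavfile

# Chop a (mono) sound file into short slices for use as IRs.
sr, audio = wavfile.read("music_track.wav")
audio = audio.astype(np.float64)
slice_len = int(sr * 0.5)                     # half-second slices

for i in range(len(audio) // slice_len):
    chunk = audio[i * slice_len:(i + 1) * slice_len].copy()
    fade = min(256, len(chunk) // 2)          # short fades avoid clicks
    chunk[:fade] *= np.linspace(0.0, 1.0, fade)
    chunk[-fade:] *= np.linspace(1.0, 0.0, fade)
    chunk /= max(1.0, np.max(np.abs(chunk)))  # normalise non-silent slices
    wavfile.write(f"slice_{i:03d}.wav", sr, (chunk * 32767).astype(np.int16))
```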




Here is a link to the patch:

Chopper



So that's a bit about the history of where the idea for this device originated. It references a whole bunch of work shared by other people, and tries to bring those ideas together to create something new. For me, this is all about tool building: taking a process that is clunky and frustrating inside a DAW, re-thinking how it could work, and combining existing technology in new ways to create new possibilities. Everything here would be achievable with separate effects within a DAW, but in reality what you can achieve here in minutes would take hours of setting up and tweaking.

Here is the result: a device which explores resonance, filtering and spatial positioning.





There are three sections to the patch: the sound source, the node field and the effects section. The sound source creates the sound used to feed the IR section of the patch. There are currently three options here:
  • Noise Gen - A simple noise generator with amplitude envelope and sweepable filter.
  • Grain Player - A basic granular file player, built around Timo Rozendal’s Grainstretch~ external.
  • Loop Player - A vari-speed looping sound player with pitch and amplitude envelope capabilities.

The middle section, or what I've called the node field, is the unique part of the device. It contains eight FX lanes, each with two IR reverb objects in series. This is the signal flow inside each:




The output volume for each effects lane, or node, can be linked to the node weighting (i.e. how far into the node area the crosshairs are); this is a linear value from 0-100% volume. If this is turned off, the user can adjust the input volume of each lane using the multislider at the top of each FX chain. Pan position can also be linked to the position of each node across the X axis of the node field. This generates a stereo field across the X axis, so sounds can be swept across the resonators. There is a central system which distributes audio files to the convolution reverb objects; this allows you to put all the sound files you want to use as IRs in a folder, point the patch to that folder and then quickly choose between them from a menu system. A pair of LFOs is linked to the position of the crosshairs in the node field; these can be used to sweep across the node field, providing a spatial approach to mixing. There is also an envelope control labelled 'input gain scaling' which defines the shape of the gain slope, so you can have linear, exponential or any other gain curve, or even more complex, experimental patterns.
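To illustrate the weighting and panning logic (this is just the idea, not the patch's actual internals; the names and the exponent curve are my own), here is a small Python sketch. Gain falls off linearly from a node's centre to its edge, an exponent stands in for the 'input gain scaling' envelope, and pan follows the node's X position:

```python
import numpy as np

def node_gain(crosshair, node_pos, radius, curve=1.0):
    """Gain is 1 at a node's centre, 0 at its edge; `curve` reshapes the slope."""
    distance = np.hypot(crosshair[0] - node_pos[0], crosshair[1] - node_pos[1])
    weight = max(0.0, 1.0 - distance / radius)  # the linear 0-100% weighting
    return weight ** curve                      # curve > 1 gives an exponential-style slope

def node_pan(node_x, field_width):
    """Pan position from the node's X position: -1 hard left, +1 hard right."""
    return 2.0 * node_x / field_width - 1.0

# Crosshair near a node at (0.3, 0.5) with radius 0.4, in a unit-width field.
print(node_gain((0.4, 0.6), (0.3, 0.5), 0.4, curve=2.0))  # ~0.42
print(node_pan(0.3, 1.0))                                 # -0.4
```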

The effects section below comprises the following:
  • Harmoniser - Splits the audio stream into six, providing pitch shifting, delay and pan on each channel.
  • Filter - A basic multi-mode filter
  • Comb Filter - Sweepable comb filter
  • Distortion - A combined bit crusher and overdrive distortion box. Signal path is crush>drive.
  • 2x VST/AU effects - Load in whatever you like here.
  • IR Reverb - Very basic convolution reverb. Comes pre-loaded with some IRs from the OpenAIR library.
All the effects are combined using a matrix, so you can route audio through them in any combination, in parallel or in series. There is no feedback protection, so be careful there.
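If the matrix idea is unfamiliar, here is a toy feed-forward version in Python: a 1 in routing[i][j] feeds effect j's output into effect i, so series and parallel configurations are just different patterns of ones. The stand-in effects and the final mix are invented purely for illustration:

```python
import numpy as np

# Toy stand-ins for the real effects.
effects = [
    lambda x: np.tanh(3.0 * x),        # "drive"
    lambda x: 0.5 * np.roll(x, 200),   # "delay"
    lambda x: x - np.roll(x, 1),       # "filter" (crude high-pass)
]

def process(source, routing):
    """routing[i][j] = 1 feeds effect j's output into effect i's input.
    Effects run in index order, so this toy only allows feed-forward
    routes; routing backwards would create feedback, which (as in the
    patch) nothing protects you from."""
    outputs = []
    for i, fx in enumerate(effects):
        feeds = [outputs[j] for j in range(i) if routing[i][j]]
        outputs.append(fx(sum(feeds) if feeds else source))
    return sum(outputs) / len(outputs)  # all lanes mixed to the output

noise = np.random.uniform(-1, 1, 44100)
series = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]    # chain: 0 -> 1 -> 2
parallel = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]  # all fed straight from source
out = process(noise, series)
```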

I expect you're asking: what does it sound like?

Well, I've been using it in a particular way, and have had that in mind throughout the development phase. But really it is just a combination of playback devices and effects, so use it however you see fit. Having said that, here are some examples. These are straight out of the app using the built-in effects, no fancy external plugins.

First up, a selection of static drones created with the white noise source. Note how the frequency fluctuations of the white noise add subtle but continuous variation to the drones:



These are some metallic resonances created by using the contact mic recordings above as IRs:


Here is an evolving drone which also uses white noise as a source, but sweeps over the IRs using the LFO:




This is a granular example. It takes a recording of a music box and plays it backwards with some position variation, then filters this through some snippets of a female choir used as IRs:




Here is some more radical granulation: frozen grain sweeps with some extra harmonic richness from the IR section.



These sequences use white noise as a sound source. The noise has a rhythmic amplitude envelope and filter sweep applied to create an almost steam-like mechanical sound.
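As a rough idea of how a source like that can be built (the tempo, filter range and curve here are all guesses, just to show the shape of the idea), here is a final Python sketch: white noise shaped by a raised-cosine pulse train and a slowly sweeping low-pass filter:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, lfilter

sr = 44100
t = np.linspace(0, 4.0, 4 * sr, endpoint=False)
noise = np.random.uniform(-1.0, 1.0, len(t))

# Rhythmic amplitude envelope: a 2 Hz raised-cosine pulse train,
# raised to a power to sharpen the pulses.
env = (0.5 * (1 - np.cos(2 * np.pi * 2.0 * t))) ** 4

# Filter sweep: block-wise low-pass whose cutoff rises from 400 Hz
# towards 4 kHz and back, carrying filter state between blocks.
out = np.zeros_like(noise)
zi = np.zeros(2)
block = 1024
for start in range(0, len(noise) - block + 1, block):
    cutoff = 400 + 3600 * 0.5 * (1 - np.cos(2 * np.pi * start / len(noise)))
    b, a = butter(2, cutoff / (sr / 2), btype="low")
    out[start:start + block], zi = lfilter(b, a, noise[start:start + block], zi=zi)

out *= env
out /= np.max(np.abs(out))
wavfile.write("steam_sequence.wav", sr, (out * 32767).astype(np.int16))
```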