In art, there’s a term, “negative space,” which refers to all the space in and around your actual subject. So if you were drawing a picture on paper, the ink or pencil defines your subject, and all that blank space around it? That’s the negative space.
Negative space is important because it’s through this space that you guide your viewer to the thing that matters. It’s in this space that your viewer gets to take a break – their mind gets to relax from processing the visual and the overall picture becomes easier to look at.
In music, there is still a negative space. There’s a time to stop playing, to take a rest, to skip a beat. Sometimes, the melody stops. Other times, the harmony might cut out. In the most extreme cases, everything stops for a moment, but it’s all done to guide the listener on a journey.
There are lots of stylistic options that contribute to the negative space in a song, but the most comparable one for me is the bridge, because it’s a construct specifically designed to wake up the listener and prepare them for a final hurrah before the song ends.
In common song formats, you generally have a structure where the verse sets up a story and the chorus is the climactic, super-catchy part that you end up humming all day. It’s the part that repeats in all those songs you sing along with. But if all you do is the verse-chorus parts, you’re missing out on a lot of your story. Where’s the intro? The outro? The transitions between all the parts? There’s just so much more.
When the music is repetitive, it’s easy to tune out, so we have to have a way to guide the listener and wake up those senses. If we think of the verse-chorus as our artistic subject, we can’t just add blank space around it and have a complete picture. Something needs to go in that void and that’s where you end up with the turnaround, the bridge, and some other things. These seemingly minor parts transition the listener between the major parts or wake the listener up to let them know another section is coming soon.
So what’s the point? Those minor parts are important – those parts are filling the gaps to make the major parts memorable. No matter your choice of art form, the subject is only one part of the overall composition. Spend time working on the minor parts too because, honestly, the chorus is only catchy when there’s a bunch of stuff around it that you enjoyed too!
P.S. I like making connections between different art forms because it makes it easier for me to retain the information. I’ve been drawing a lot longer than I’ve been making music and anytime I can connect the two, well, that’s a win in my book.
Let me stop you before you get too far into this and just say – this is more of a technical explanation than what I would consider my normal postings. My intention in this blog was to show the humanity behind the art – to give people a way to connect to the person creating that music you love. You do love it, right? 😛
Jokes aside, the idea was never to venture into a space where I try to teach you something because who am I to do the teaching? The way I see it, there are lots of people with way more experience and knowledge already handling that. But I ended up here because something recently clicked for me and I wanted to share that with people who might have had the same problem understanding this topic. So if you needed to know more about “bussing” in your DAW (or specifically, in Logic Pro), then read on, friend! Otherwise, I’ll see you next week!
Magic Bus, Yellow Bus, Not a Bus, Bussing, Signal Routing, wtf?
I struggled with fully wrapping my mind around “bussing your reverb” because whenever I would see it in a tutorial (written or video), it was mentioned as absolutely necessary to making your mixes better, but it was never accompanied by a visual reference showing how it works. So let me try my best at breaking it down here.
First off, consider a single track in your production. Typically, it’ll be simple: an input, some plugins, and an output destination.
You can add any number of effects to this single track, including your reverb, compressors, delays, echo, whatever. And that’s where all of the tutorials I’ve seen will tell you not to do this because it’s messy, it wastes CPU cycles, or, my personal favorite: it just makes you look bad because all the professionals use “bussing”.
So what is bussing?
Well, what if I told you that your input to a track (bus) could be the audio signal output of another track? As far as I can tell, that’s all it really is: basic audio signal routing. What gets confusing to me here are the terms used and ways that these things get referenced in the tutorials so consider this diagram:
I know, the complexity increases in this diagram a bit… but here’s my explanation.
Let’s say Track 1 is your MIDI keyboard input and you’re using the Steinway Grand Piano in Logic. Track 1 is going to look a little like this now:
This channel strip is predefined with a bunch of settings, including some sends, but I need to add more. First thing, I want to add a little reverb to my piano track that matches the reverb I’m using for my other instruments. Second, I need to modify the piano sound in a specific way to match the song style. Probably most important, I don’t want to muck with the settings as shown above because if I were to change to a different instrument, most of my settings would be lost. So there are two things to do…
Thing 1 / Technique 1
The first thing is marked as “signal sends” in the screenshot which corresponds with the “sends (bus track)” in the diagram.
This technique is what I’ve seen commonly referred to as “bussing” and is extremely useful in handling any additive effect where you still want the whole, original signal to be available in your final output. In short, the software clones your audio signal, at a desired level, to a specified location while also sending 100% of your audio signal to the output location.
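If you think better in code than in diagrams, here’s a tiny toy model of that clone-and-send behavior. To be clear, this is plain Python I made up for illustration – none of these classes or method names are Logic Pro’s actual API:

```python
# Toy model of a "send" (Technique 1). A signal is just a list of sample
# values; a Bus is anywhere audio can be routed. Not real DAW code.

class Bus:
    """A destination that collects whatever audio is routed to it."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def receive(self, signal):
        self.received.append(signal)

class Track:
    def __init__(self, name, output):
        self.name = name
        self.output = output   # where 100% of the signal always goes
        self.sends = []        # (bus, level) pairs

    def add_send(self, bus, level):
        self.sends.append((bus, level))

    def play(self, signal):
        # Each send gets a *copy* of the signal, scaled to the send level...
        for bus, level in self.sends:
            bus.receive([sample * level for sample in signal])
        # ...while the untouched, full signal still reaches the output.
        self.output.receive(signal)

stereo_out = Bus("Stereo Out")
reverb_bus = Bus("Bus 23")   # imagine the reverb plugin lives here

piano = Track("Piano", output=stereo_out)
piano.add_send(reverb_bus, level=0.5)   # send a 50% copy to the reverb

piano.play([1.0, 0.5, -0.5])
print(reverb_bus.received)   # [[0.5, 0.25, -0.25]] -- the scaled copy
print(stereo_out.received)   # [[1.0, 0.5, -0.5]]   -- the full original
```

The thing to notice is that the output still gets 100% of the original signal – the send only ever hands out a copy.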
You’ll most likely use this when adding reverb to your tracks, allowing for a level of consistency across all your instruments without muddying the original sound. You probably wouldn’t use this for your compressor or limiter because, well, a send doesn’t change the original audio signal – and changing the original signal is exactly what those effects need to do.
Thing 2 / Technique 2
The second technique is redirecting the actual output. By default, this goes to “stereo out” but you can change this! (Note: You may also see “stereo out” referenced as the “master track” or “master bus” in places but not here. Here, it will be referred to as “stereo out”.) The concept is similar to Thing 1 except we send the whole signal to the output instead of cloning the signal. For clarity, this technique routes 100% of the audio output to a specified destination.
This is an important technique when you need to make fundamental changes to the audio signal without retaining the original sound. This becomes even more important when you want to apply the same technique to multiple tracks. For example, assume you have two separate piano tracks and you want to apply the same filter plugin to both. Well, set both tracks to the same output bus and apply the filter on that bus instead – your computer will thank you for being more efficient, I promise.
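Here’s the same idea as a toy sketch (again, made-up Python, not Logic’s actual API, and the “filter” is just a stand-in for a real plugin): two tracks point their full output at the same bus, the bus applies its plugin once, and only the processed signal moves on.

```python
# Toy model of "Technique 2": redirecting 100% of a track's output to a
# shared bus that applies plugins before passing audio along.

def halve_volume(signal):
    """Stand-in for a real plugin (EQ, AutoFilter, ...): halve the level."""
    return [sample * 0.5 for sample in signal]

class Bus:
    def __init__(self, name, plugins=(), destination=None):
        self.name = name
        self.plugins = list(plugins)   # applied to everything routed here
        self.destination = destination # where the processed signal goes next
        self.received = []

    def receive(self, signal):
        for plugin in self.plugins:
            signal = plugin(signal)
        if self.destination is not None:
            self.destination.receive(signal)
        else:
            self.received.append(signal)

stereo_out = Bus("Stereo Out")
bus17 = Bus("Bus 17", plugins=[halve_volume], destination=stereo_out)

# Both piano tracks route their entire output to Bus 17, so the plugin
# is set up (and computed) once instead of once per track.
bus17.receive([1.0, 0.5])    # piano track 1
bus17.receive([0.25, -0.5])  # piano track 2

print(stereo_out.received)   # [[0.5, 0.25], [0.125, -0.25]]
```

Notice that nothing of the original, unprocessed signal reaches “stereo out” – that’s the whole difference from a send.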
Here’s a visual that I hope helps emphasize the setup (with the unimportant strips blurred out). What this shows is two piano tracks with an output set to Bus 17, where an equalizer and an AutoFilter plugin get applied before the signal goes to stereo out.
You may also note the first technique in this visual where one piano track and Bus 17 are sending audio to Bus 23. (Bus 23 is being used for some small reverb effects.)
Things / All Together
So, if we return to my complex flow diagram above, but replace some of the diagram bits with screenshots from Logic Pro, it looks like this:
In this setup, I have a Steinway Piano instrument playing some MIDI notes. I clone the signal and send it to Bus 23, where I apply the ChromaVerb plugin to the new audio signal. The whole audio signal from my piano track is then sent to an output which I have specified as Bus 17, where I apply a Channel EQ and an AutoFilter plugin. Bus 17 and Bus 23 send the resulting audio signal to “stereo out”.
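And because everyone needs a diagram (or the code equivalent), here’s the whole path traced as one little script. This is just a pretend trace in Python – no real audio, no real Logic API – where each “plugin” is only a label recorded along the way:

```python
# Toy trace of the full routing: a copy of the piano goes to Bus 23
# (reverb), the full output goes to Bus 17 (EQ + filter), and both
# buses end at Stereo Out.

def route(signal_path, bus_name, plugins):
    """Pretend-process a signal: record the bus and plugins it passed through."""
    return signal_path + [bus_name] + list(plugins)

piano = ["Steinway Piano"]

# Technique 1: a *copy* of the piano signal goes to Bus 23 for reverb.
to_bus23 = route(piano, "Bus 23", ["ChromaVerb"])

# Technique 2: 100% of the piano's output goes to Bus 17 for shaping.
to_bus17 = route(piano, "Bus 17", ["Channel EQ", "AutoFilter"])

# Both buses send their result to Stereo Out.
final_paths = [route(path, "Stereo Out", []) for path in (to_bus23, to_bus17)]
for path in final_paths:
    print(" -> ".join(path))
# Steinway Piano -> Bus 23 -> ChromaVerb -> Stereo Out
# Steinway Piano -> Bus 17 -> Channel EQ -> AutoFilter -> Stereo Out
```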
These techniques for routing your audio signal should be helpful in a number of scenarios, but the most important one for me was efficiency. They let you apply the same plugin to multiple audio sources, which saves you time and saves your computer a few cycles – letting you layer in just one more sound. Don’t take my word for it, try it!
Anyway, I could drag this out but there are a lot of tutorials out there that will give you a much deeper dive into all of this… The main thing I wanted to add to the conversation was the diagram… everyone needs a diagram.
P.S. One of my favorite moments when researching this particular topic was landing on one musician saying “imagine a yellow bus that takes people somewhere else” and another musician saying, “it’s not at all like a bus with people.” I’m not going to take sides because the right answer is whatever works for you. In both cases, though, I was not satisfied with the description simply because there was no diagram to help me visualize it.
My advice? Spend a little time learning some basic documentation skills. It’s worth it and it’ll help your audience.