Yellowgold

The official website showcasing the music of Yellowgold, produced by Jason Howell.


Adding a drum track to un-timed guitar

In an earlier post, I told you about how I wrote Heavy Bones thanks to a dream I had. After everything that I wrote about in that post, I opened up Pro Tools a few days later and hit record. I simply played the guitar part start to finish, no click track guidance at all. My thought was that I'd have a bare bones scratch guitar part to play around with vocal harmonies over the top. All of this was in an effort to find out how I wanted to approach "the real project."

Only, after a few days, I was really attached to that guitar part. Most of the vocals as well. I liked it and considered keeping the song as a simple Guitar/Vocal affair. But over the course of the next few weeks, drum tracks started to appear in my head, with a dramatic reveal of the bass and drums becoming a key part of that idea. I couldn't shake it.

So. I decided, what the heck. How hard could it be to create a drum track around the freestyle guitar? I could tell that, even though I wasn't playing to a click track, I had kept myself more or less steady tempo-wise for most of the take. So I felt confident that I could get it sounding nice if I had the time to focus on it.

I'm sure there are a number of ways to approach this. And I'll be completely honest in saying I've never needed to do this before, so I saw it as a way to discover how I might do such a thing and learn in the process. Here's what I did.

1. I played the track from the beginning and, on every perceived beat, I dropped a marker on my timeline. When I was done, I had a four-minute track with a shit-ton of markers, each signifying a roughly defined downbeat (after all, I wasn't perfect on these).

Dropping markers to the beat

2. I zoomed in at the top of the song to analyze where each marker was dropped, comparing it to the waveform of the guitar (rectified view, imo, makes it easier to see where those sounds begin). I shifted each marker so it hit close to each strum on the guitar, if it lined up with an obvious strum. It didn't need to be dead perfect, just close enough that, once the drums were programmed and matched to the marker locations, the downbeat of a kick or a snare would hit at nearly the same time as the strum.

3. I recorded a MIDI track from the beginning and performed a basic Kick/Snare/Hi-hat combo track using my MIDI keyboard. I knew it wouldn't be perfect, but it gave me a starting point to mold the beat around. I kept my playing simple enough to get it right, but added a little variation depending on what each section called for.

4. Here's where it gets tedious and time consuming. Starting at the top of the song with the first bar of four beats, I first shifted each drum hit that was supposed to land on a beat to its appropriate place on the downbeat (i.e. the nearest marker), which left the drum hits that land between those beats to be placed almost ENTIRELY by feel. I set playback to loop over that stretch and shifted notes around until they sounded right. Remember, there is no grid to go by, so you are shifting them around to try to keep it all sounding as human and natural as possible. (For the curious, there's a rough code sketch of the on-beat snapping idea after the last step below.)

5. Repeat this on every bar throughout the song.

6. Once I had everything placed, I played through a number of times, and the second something jumped out at me, I stopped, zoomed in on the bar in question, and tweaked.
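
To make that on-beat snapping step a bit more concrete, here's a rough sketch in Python. It's purely illustrative and not how Pro Tools does it: marker_times are the tapped (and nudged) beat markers in seconds, hit_times are the recorded drum hits, and the tolerance deciding what counts as "meant to land on a beat" is a made-up value. Anything not near a marker is left alone, to be nudged by ear just like in the steps above.

import bisect

def snap_to_markers(hit_times, marker_times, tolerance=0.06):
    """Move any hit within `tolerance` seconds of a marker onto that marker."""
    snapped = []
    for t in hit_times:
        i = bisect.bisect_left(marker_times, t)
        candidates = marker_times[max(i - 1, 0):i + 1]  # markers on either side of the hit
        nearest = min(candidates, key=lambda m: abs(m - t)) if candidates else t
        snapped.append(nearest if abs(nearest - t) <= tolerance else t)
    return snapped

# Example: markers tapped roughly every 0.75 s, hits played slightly loose.
markers = [0.00, 0.74, 1.51, 2.26, 3.02]
hits = [0.02, 0.38, 0.76, 1.13, 1.49, 1.90, 2.24, 2.63, 3.00]
print(snap_to_markers(hits, markers))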

I spent a few days getting that drum track to sound right, and in the end, it totally worked. There are only one or two spots in the track where I can tell that the guitar playing slowed or sped up slightly, and at one point the guitar slowed down too far for my liking, so I actually edited the guitar to bring it back in time and shifted everything after it to compensate. I have to say, after all that work, it sounds pretty solid and I'm SO happy I spent that time. Listening back, it isn't locked to a rigid tempo, and it almost sounds MORE organic because of it.

Refining the drum track

Let's say I have a project built from a template that follows my workflow: a tempo, a timeline with markers indicating the various points and movements of my song, a scratch guitar track, and scratch vocals.

Often, it's easier for me to get a good run-through on guitars if the click track I'm playing to isn't actually a click track at all. Instead of a lifeless, robotic click noise denoting the downbeat, I'll replace it with a Battery drum track that drops a kick drum on the downbeat. It gives my playing a nice little pulse to lock in with.

Battery
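
If you just want to generate that kick "click" rather than play it in, here's a minimal sketch using Python and the pretty_midi library — not my Battery workflow, just the idea in code. The tempo, length, and note number (36, the General MIDI kick) are placeholders, and the loop drops a kick on every beat, so adjust it if you only want the bar downbeats.

import pretty_midi

def kick_click(bpm=120, bars=8, beats_per_bar=4, path="kick_click.mid"):
    pm = pretty_midi.PrettyMIDI(initial_tempo=bpm)
    kit = pretty_midi.Instrument(program=0, is_drum=True, name="Kick click")
    beat_len = 60.0 / bpm
    for beat in range(bars * beats_per_bar):
        t = beat * beat_len
        kit.notes.append(pretty_midi.Note(velocity=100, pitch=36, start=t, end=t + 0.1))
    pm.instruments.append(kit)
    pm.write(path)  # drop the resulting MIDI onto a drum sampler track

kick_click(bpm=96, bars=16)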

Sometimes I already have a good indication in my head as to the basic beat of the song. At this early stage it's about getting the general idea down well enough to keep moving forward, without getting bogged down in the details. And I almost always find drum programming easier to do as a live performance than as straight-up MIDI note editing.

...which is kind of odd when I consider the years I spent programming house music under the Raygun moniker. Almost every note in that project was created by manually editing MIDI piano rolls rather than by live rhythm performance. It's simply a different approach lending itself to a different sound. House music is electronic and robotic by nature, so a human feel can actually be frowned upon. Not in every case, but non-quantized beats can be tough for a DJ to manage during a live set, so it's definitely a consideration that's followed rather closely.

The music I produce as Yellowgold is always far more organic... at least it is in my head. The problem is the simple fact that A, I am not a drummer... and B, I don't have a drum set to try to become one. I'd certainly love to be able to do it, but from a space and noise perspective, it's simply not possible.

So my approach to MIDI drum programming usually goes like this:

1. Click track, or a very basic 2-4 bar loop of programmed drums. As simple as a kick/snare combo with some super light hi-hats. I don't want the drums to be too detailed at this early stage; I merely want a backbeat that I can groove to while I play. If I get too creative here, I might end up accenting parts of the beat that clash with other elements I have in mind for later stages of the track.

2. Once I have this, I lay down the scratch tracks that give the song a beginning-to-end structure, the skeleton for the entire thing.

3. Once I have a sense of the twists and turns, some fills and a more detailed drum sound start to take shape in my mind. Certain parts seem ripe for a particular tom fill. Suddenly crashes on particular downbeats feel appropriate. Once I have an idea for all of these things, I'll create an entirely new region for Battery and start to perform the foundation, live, with the Keystation Pro-88. I've gotten pretty used to the key mapping for drums on the keyboard, so it's become much easier for me to play the kit freestyle.

The thing to realize here is that I don't expect this pass to be 100% perfect through to the end of the song. I'm hoping to nail the feel of the song for around 16 bars (32 would be great), with as little need for quantization as possible. I want a long enough block of solid drums that I can feel comfortable copying it throughout the rest of the track as needed, once I've done what comes next.

4. When I have that solid block, I treat the notes, sometimes on an individual basis, with selective quantization. I don't blanket quantize unless I'm lazy. (Hey, sometimes it happens.) And quite honestly, the material doesn't always NEED that kind of attention. But I find the block feels more alive when you give it this kind of precise treatment than when you throw a straight-up 8th or 16th note quantize across the whole thing. I almost always quantize the downbeat kicks strictly, so the downbeat is always dead on. I usually quantize the hi-hats with a light randomization (somewhere in the 6-8% range) so they don't sound too robotic. Snares usually hit very close to the beat, though I might randomize those around 2-4%. I also take a lot of time refining the velocity of these hits... particularly in the hi-hat line. Hi-hats hit more often than anything else in a drum part, and having them all land at the same or nearly the same velocity is a surefire way to spot MIDI drums, so I really try to take some time getting those sounding as natural as possible. (There's a rough sketch of this pass in code below.)
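
To put rough numbers on that pass, here's a minimal sketch in plain Python. Everything in it is an assumption for illustration only: notes as simple dicts with time in beats, General MIDI drum numbers, a 16th-note grid, and the 2-4% / 6-8% figures treated as fractions of a grid step.

import random

KICK, SNARE, HAT = 36, 38, 42  # General MIDI drum notes (assumed mapping)
GRID = 0.25                    # 16th-note grid, in beats

def quantize(time, grid=GRID):
    return round(time / grid) * grid

def humanize(notes):
    out = []
    for n in notes:
        t = quantize(n["time"])
        v = n["velocity"]
        if n["pitch"] == KICK:
            pass                                              # kicks stay dead on the grid
        elif n["pitch"] == SNARE:
            t += random.uniform(-0.03, 0.03) * GRID           # roughly the 2-4% feel
        elif n["pitch"] == HAT:
            t += random.uniform(-0.07, 0.07) * GRID           # roughly the 6-8% feel
            v = max(1, min(127, v + random.randint(-18, 6)))  # vary hi-hat velocity too
        out.append({**n, "time": t, "velocity": v})
    return out

beat = [{"pitch": KICK, "time": 0.02, "velocity": 110},
        {"pitch": HAT, "time": 0.27, "velocity": 90},
        {"pitch": SNARE, "time": 0.51, "velocity": 105},
        {"pitch": HAT, "time": 0.74, "velocity": 90}]
print(humanize(beat))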

I'm exhausted just typing this, because it reminds me how much time I spend on these things, and yes, it's tiring. It's by no means my favorite part of the process, but I feel it's essential to producing the kind of sound I'm looking for. It would be way easier to just do a blanket quantization and be done with it, but I'd hear the robotic nature of those drums every time I heard that track and it would eat me up inside.

Up next: What do you do when you've recorded an acoustic guitar as your very first track in a project, to no click track, then later decided you want to make that un-timed track the foundation of a production complete with drums and everything else? I did that for Heavy Bones, and I'll share with you what I did.

Finding time to write music

I'll tell you one thing. As an independent musician with a few other significant full-time jobs (father, husband, podcast producer), I find that carving out time to sit down and focus on writing music can be a big, BIG challenge. I'm certain you know what I'm talking about.

When I've gone too long, it becomes that thing that I have to do, and not necessarily in a good way. "I must sit down and try to work on new music tonight, cause I haven't for so long, I'm afraid I might not know how to do it anymore. And if music is so important to me, then it should be easy, right?"

My experience has told me that writing because I feel like I have to is a recipe for certain disaster. It's like taking that thing you love, setting a 200 lb weight on top of it, and saying "ok, lift for two hours. Go." Doesn't sound like much fun. And creativity is rarely sparked by rules and schedules. You either have something, or you don't. Or you don't, but your mindset is such that you're open to experimenting until you DO in fact have something. In other words, you don't, but you know how to get there.

When I embark on a new album, I consider it a project with an end point. I don't look at it as "oh, I'll wait until the new year and then start a diet and see how it goes" or "I'll quit smoking next week.... Or maybe the week after that, cause I'm going to the bar next weekend." 

If I'm writing material for an album, I've made a decision that the next X amount of time will be full of a lot of time spent doing the things I normally do (father, husband, job), as well as SOMEHOW making time to write, and staying devoted to that effort. Life can get crazy, but if you let yourself slide, who knows when or if anything you write will ever see the light of day.

So, I set a goal. I might not know the date on a calendar when this album will see its release, but I know roughly a time at which I'd like to see it happen. Six months from now? A year from now? Doesn't really matter what I pick as long as it's realistic. Make it too short and you set yourself up for certain disaster. And in light of my other jobs in life, I still have to actually find the time in my schedule to allow all of this creative stuff to happen. Nothing wrong with setting a date far out in advance, and then finishing before then.

This next part is what makes setting a lofty goal like this a possibility with everything else I have going on in my life:

Evaluate the times during which you are alone, and the times during which you are in a house full of people. This can guide you in how you plan your sessions. Here's what I mean.

Alone time is tracking time. If I have a song with lyrics that I've finished, and a scratch-track foundation for the song in Pro Tools, and I know that Wednesday afternoon from 3-5pm I'm home alone before my family arrives, this is perfect tracking time. So I set it. And that's all I do during that time, unless I'm incredibly efficient and am suddenly left with extra time to devote to something else. Basically, if it's time by myself at home in the studio, it's the time I get to do the really loud things that are sure to disrupt the other bodies in the house. Or maybe they aren't disrupt-able, necessarily, and don't care when you turn your guitar amp up to 11 to get that nice feedback effect in the song you're working on. Either way, it's still a distraction to me to think about anyone else around when I'm doing loud things. I consider this the time I do things like track vocals (lots of layers and multiple takes of each), amped and acoustic guitar, live percussion, and, much later in the process, mixing.

Occupied time is editing time. About 3-4 nights a week, I work in my studio from 9 to midnight, hours too late for screaming a layer of vocals or recording that acoustic guitar track. This time is purely dedicated to arrangement, refinement of already-tracked elements (with vocals, for instance, I do a lot of alignment during this time), synth parts, and pretty much anything programmed, like drums. All of that can take place without compromise inside headphone land.

Now, if I've played my cards right, I've tracked to my heart's content during those two hours in the afternoon, more than I actually need. (Hence why I recommend you track every vocal part 2-3 times and keep all of those takes.) Then, when I get to my evening session, I have an overabundance of options for each part to pick from, based on whatever it is I'm working through. I don't want to think "dang, I didn't get this part today" and then have to wait until my next open daytime slot to record it. I mean, I will if I have to, but I'm an instant gratification kind of guy. So I want those parts there when I've blocked out time for editing them.

It's all about plotting your open time efficiently to the material you are working with. Boy, this sounds like something you should keep track of to make sure you have stuff to do when you've blocked time for it! Let's talk about that next.

Better DeEssing

Don't just de-ess your entire vocal track. Automate it and use it only when it's needed. A DeEsser is a fairly specific utility: a pinpoint way to compress a particular frequency range. So, as its name implies, it's good at removing the harshness associated with the sound of the letter S, among other things. The idea is that you apply the DeEsser to a track with problem S's, find the offending frequency, then dial in how much of that frequency you want to reduce to an appropriate level.
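
To make that description a little more concrete, here's a rough sketch of what a broadband de-esser is doing under the hood — not the Waves plugin, and not something I actually run, just the concept in code. The band edges, threshold, and range are placeholder values, and x is assumed to be a mono vocal as a float NumPy array at sample rate sr.

import numpy as np
from scipy.signal import butter, sosfilt

def deess(x, sr, f_lo=5000.0, f_hi=9000.0, threshold_db=-30.0, range_db=-6.0):
    """Duck the whole signal by up to range_db whenever the harsh band is hot."""
    # Sidechain: listen only to the harsh band (the "sweep to find the S" step).
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=sr, output="sos")
    side = np.abs(sosfilt(sos, x))

    # Crude envelope follower: instant attack, ~30 ms release.
    env = np.empty_like(side)
    release = np.exp(-1.0 / (0.030 * sr))
    level = 0.0
    for i, s in enumerate(side):
        level = s if s > level else level * release
        env[i] = level
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))

    # Above the threshold, pull the gain down toward range_db (0 dB elsewhere).
    over = np.clip((env_db - threshold_db) / 12.0, 0.0, 1.0)  # soft-ish knee
    gain_db = over * range_db
    return x * (10.0 ** (gain_db / 20.0))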

I've found that it can sometimes be hard to know when you actually need a DeEsser. For one, your monitoring situation might not be ideal: good monitors expose these kinds of issues, and bad monitors hide this and much, much more. It's also particularly hard when you are entrenched in a long session and your ears have already fully adjusted to the sound of your track; they might simply be used to what they're hearing. So I tend to act on the need to de-ess when I'm approaching a track for the first time during a session.

The problem with a DeEsser is that it affects the whole sound when applied in a blanket sort of way. Say you find the frequency to adjust and simply apply the plug-in to the entire vocal track. When there are NO S's to knock down, it's still affecting the sound of the rest of the vocals. It likely dulls the sound and removes some of the clarity. As a result, I usually apply DeEssing manually rather than simply dropping in the plug-in and moving on. And yes, this takes a while, but in my mind it's worth the effort. So let's take a look at the process, step by step. In this example I'm using the Waves Renaissance DeEsser, but any will do if you follow the general idea:

  • Place RDeEsser on vocal Comp track
  • Enable automation for the Range control
  • Load Male DeEss Narrow preset as a starting point
  • Turn on Side Chain and solo the vocals you are treating (this allows you to hear ONLY the audio that will likely be affected by the DeEssing)
  • Sweep to find the harshest frequency in your worst problem-S area
  • Once found, switch back to the Audio setting
  • Adjust the Threshold so the offending S's move the reduction line in the graph. (We will be automating this anyway, so don't worry if it also moves when non-S's are audible)
  • As a quick experiment, pull Range all the way down to hear the extreme. The S's will likely turn into lisps. Sometimes when dealing with effects like this, it's good to test what the extreme setting gives you; then you have a baseline to compare against when setting it properly.
  • Now, reset Range to 0 and slowly pull it down just until the harshness goes away. You shouldn't need to go very far, hopefully.

Enable automation for the Range control

Testing with extreme range

Write the Range number down!

Now, THIS is where the range needs to be in order to tame those S's. WRITE THIS NUMBER DOWN.

We aren't done yet. The last thing you want to do at this point is just leave the plugin on with these settings. The quality of the rest of your vocals (i.e. the parts that aren't S's, which is EVERYTHING else) will be affected. You can preview the rest of your vocal track and toggle the plugin's Bypass to hear what I mean.

Here's where we automate the range:

  • Set range back to 0 to start off with.
  • Play your vocal track until the very first offending S, and park your cursor slightly before that point.
  • Set your track view to show the automation lane for the RDeEsser Range control. (It should show that it's currently set to 0 across the entire track)
  • Draw in the automation AROUND that S as shown in the screenshot, making sure that at its most dramatic point, the automation hits the Range value you wrote down above.
  • Now you've automated the DeEsser to knock the S down only by the amount you specified and only at the time you specified. Afterward, it resets to a Range of 0, effectively bypassing the processing.

Drawn automation for the first S

Now, we get the payoff for all of that setup:

  • Highlight that automation you drew around that one S and copy it.
  • Now play through your track, and anytime you encounter a bad S, paste that automation in slightly before it. If it's not perfect, adjust as needed. The point is that you've created an automation block that treats your S's, so pasting it wherever you encounter a bad one should get you most of the way there.
  • As always, different S's might require different amounts of Range (or even different frequencies!), so how detail-oriented you want to be with this process is up to you and your patience.

Automation pasted onto other problem areas

What you've done here is utilize all of the benefits of De-essing without blurring the crap out of your entire vocal line in the process. You are automating the effect INTO the track ONLY as needed.
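
If it helps to see that shape in the abstract, here's a tiny sketch of the Range automation as plain data, with made-up timings: Range sits at 0 everywhere, dips to the value you wrote down around each S, then comes right back.

def range_automation(s_times, range_db=-9.0, pre=0.03, hold=0.12, post=0.05):
    """Return (time, value) breakpoints: 0 -> range_db -> 0 around each S."""
    points = [(0.0, 0.0)]
    for t in sorted(s_times):
        points += [
            (t - pre, 0.0),          # still bypassed just before the S
            (t, range_db),           # full reduction as the S hits
            (t + hold, range_db),    # hold through the S
            (t + hold + post, 0.0),  # back to bypassed
        ]
    return points

# e.g. three problem S's spotted at these timestamps (seconds)
print(range_automation([4.21, 7.85, 12.30]))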

Now for some examples from The Last Thing I Ever Do, one of the tracks off my upcoming album. Here are the various steps I got to as I walked through the process listed above:

Hope that all makes sense!

Eliminate noise in your project

Now that's just treating the space in which I work. What about the project itself?

I spend much of my production time cleaning up tracks to make sure that all extra unnecessary noise is thrown out. In fact, this is a great task to leave for those times when you aren't feeling very musical, or it's late at night and you can't make much noise.

Take a look at this waveform. Looks nice, doesn't it?

Scan Lines vocal comp, normal view

Now let's increase the amplitude view of that waveform and see what we didn't see before.

Scan Lines vocal comp, increased amplitude view

Looking at the gaps in between each vocal take, you start to see the noise that fills them. It may be very hard to hear, but it's there. And that noise can muck up your mix. It can mess with post-processing like compression in ways that you don't want. I mean, it's called "noise." So, unless you are producing glitchy techno, Industrial, or Lo-Fi music, you probably don't want anything labeled "noise" co-existing with all the other stuff you actually intended to be in your song, right?

Now you might be thinking, "but it's so low level, who cares? Nobody can hear it when 30 tracks of instrumentation are playing over the top of it at a much louder volume." But there's another takeaway: every recorded track has the potential to add a new layer of noise to your project, and those layers compound into something noticeably louder than any one of them alone. That can be very significant.
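
As a back-of-the-envelope illustration (the per-track level here is made up, but the math isn't): if a bunch of tracks each carry a similar, uncorrelated noise floor, their powers add, so the combined floor rises by 10*log10(N) dB.

import math

tracks = 30
single_floor_db = -60.0                          # hypothetical per-track noise floor
combined_db = single_floor_db + 10 * math.log10(tracks)
print(round(combined_db, 1))                     # -45.2, i.e. almost 15 dB higher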

Vocal parts in particular get hours of treatment from me, for a number of reasons I'm sure I'll go into at some point, but one big reason is the elimination of extra noise. I regularly have multiple vocal tracks in a project layered on top of each other, and the noise in between my actual singing parts includes bleed from the headphones, my breathing between phrases, little clicks and pops between regions, my chair creaking, the sound of a bird outside my window... the list goes on and on. For example:

You hear all that in between phrases? Little breaths, clicks, lip smacking... Get it out! You don't need it in there. I spend a ton of time trimming the noise and adding fades to each side of the regions that I keep, and I'm convinced that it makes the end result even better. Here's the same clip with the edits:

Sometimes the difference is subtle, but it's almost always better! It's worth the time. It's the fit and finish that professional-sounding music undergoes without fail.

Scan Lines vocal comp, noise removed and fades added
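
And for the curious, here's a tiny sketch of that trim-and-fade move in NumPy terms — not how it's done in Pro Tools, just the concept: keep only the regions you want, and give each kept region a short fade at both edges so the boundaries don't click. The sample rate, fade length, and the region itself are placeholders.

import numpy as np

def fade_edges(region, sr, fade_ms=10.0):
    """Apply short linear fades to both ends of a kept region."""
    n = int(sr * fade_ms / 1000.0)
    out = region.astype(np.float64).copy()
    out[:n] *= np.linspace(0.0, 1.0, n)    # fade in
    out[-n:] *= np.linspace(1.0, 0.0, n)   # fade out
    return out

# e.g. a 2-second kept phrase at 48 kHz (random noise standing in for audio)
sr = 48000
region = np.random.randn(2 * sr) * 0.1
print(fade_edges(region, sr)[:5])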