Wednesday, March 15, 2017

So You Want To Produce a Podcast? Part Five: Write Down all the Things

I'm guessing some people have noticed it's been a while since my previous post. You may be asking what I've been doing between posts. If you guessed editing a podcast for a deadline, you'd be correct. If you also guessed writing up a transcript and show notes for the same show, you'd be correct again.

Wait, both at the same time? Why on Earth would you do something like that? I'm happy to tell you.

There's a lot of information you can include with podcast episodes. Some shows publish just the title, the date, and a brief description of the contents. Others provide a lot of detail about each episode, including a list of relevant resources and references. If you want to be comprehensive, you provide a full transcript of the show.

Let's take a step back... why provide a full transcript? As an advocate of Accessibility, I believe in making the show usable by as many people as possible. For most users, listening to the show is sufficient, but what if you can't listen to the show? Closed captioning for podcasts is a limited technology, typically used with videocasts rather than audio-only podcasts. Therefore, I provide a full transcript for each "The Testing Show" episode.

Transcribing a podcast is slow, time-consuming work. What about speech recognition software? Yes, I've tried quite a few packages. In most cases, I get a partial result mixed with a lot of stalling and large regions of text to correct. I've experimented with using Soundflower to route WAV playback into dictation software to produce a text file. When it's just my voice, speaking slowly, I get a good hit rate of spoken words to transcribed text. The more speakers on a recording, the lower that hit rate. Between the time I spend editing and the time I spend fixing errors in the generated transcript, I don't see any real time savings. Therefore, I kick it old school and manually transcribe the shows.
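
If you want to run the same experiment yourself, a quick way to get a feel for the hit rate is to feed a short WAV export through a recognizer. Here's a minimal sketch using the third-party SpeechRecognition package and its free Google Web Speech endpoint; the file name is a hypothetical placeholder, and this is not the setup I use for the show.

    import speech_recognition as sr

    recognizer = sr.Recognizer()

    # "episode_clip.wav" is a hypothetical short, mono WAV export of one speaker.
    with sr.AudioFile("episode_clip.wav") as source:
        audio = recognizer.record(source)  # read the whole clip into memory

    try:
        # Free web endpoint; fine for a quick experiment, but accuracy drops
        # sharply once multiple speakers start talking over each other.
        print(recognizer.recognize_google(audio))
    except sr.UnknownValueError:
        print("The recognizer couldn't make sense of the audio.")

Even a quick test like this makes the trade-off obvious: one careful speaker transcribes reasonably well, a lively panel does not.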

"Dude, you can totally farm that work out to other people". I've done exactly that on more than a few occasions. When I am far ahead of the deadline and I feel the conversation is clear and concise, I am willing to have other people (read: pay) do the transcription. To have that be effective, I need to complete audio editing at least a week before the deadline. Sometimes, that's easy to do. Other times, not so much. Real life finds ways to take away from podcast production time, especially since I don't do this full-time. If I can't guarantee a long enough lead time to have a service do the transcription, I do it myself. If you do decide to have a service do your transcription, I give high marks to "The Daily Transcriber".

Waveform editor on the left of me.
Text editor on the right.
Here I am, stuck in the middle with you ;).


Along with a transcript, I also provide what I refer to as a "grammatical audio edit" for each show. What's a grammatical audio edit? It's where I go through each statement from each speaker and remove elements that would not flow well in a written paragraph: verbal tics (the "um", "ah", "like", "you know"), repeated sequences, tangents, semantic bleaching, and so on. Realize, I cannot magically fix the way people speak. At a certain point, I have to let them say what they will say in their own style, and any transcript will, of course, reflect this. I do a word-for-word scrubbing of the recorded audio. Since I'm editing by the second anyway, transcribing as I edit is a reasonable approach. I listen to a section of dialogue, edit and sequence the conversation to a reasonable cadence, and while I'm doing that, I type out (or dictate, using Apple's "dictation" option, activated by pressing "fn" twice) the words as recorded.
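
On the transcript side of that scrubbing, the truly unambiguous tics can even be cleaned up with a tiny script. Here's a minimal sketch in Python; the filler list and sample line are illustrative, and anything context-dependent ("like", "you know") still gets a human pass.

    import re

    # Only the unambiguous fillers; "like" and "you know" can be legitimate words,
    # so they still need a judgment call from a human editor.
    FILLERS = re.compile(r"\b(um+|uh+|ah+|er+)\b[,.]?\s*", re.IGNORECASE)

    def scrub(line: str) -> str:
        cleaned = FILLERS.sub("", line)
        return re.sub(r"\s{2,}", " ", cleaned).strip()  # tidy any doubled spaces

    print(scrub("Um, so the, uh, the build was broken."))
    # -> "so the, the build was broken."  (the repeated "the, the" still needs me)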

For this to work well, it's important that you have already done a rough edit of the podcast. You should know which sections you are going to keep and which ones you are going to "leave on the cutting room floor", have silenced out those sections, and have run "Truncate Silence" to squeeze everything together. This way, you know the sections you are editing and transcribing will be in the finished podcast. You can always add a section back later if you change your mind, but removing a section you've already fully edited and transcribed is frustrating. Minimize this if you can.

GEEK TRICK: If you use Audacity, you can use the Transcription tool. It slows down or speeds up the audio to a rate that you determine, and it has its own playback button that plays the audio at that designated speed. It also lowers or raises the pitch of the audio, which can be an annoyance. Still, for making sense of a fast passage, or for listening at the pace that you type, this feature is helpful. The Transcription tool, in fast playback mode, is also handy for checking levels between speakers.

Audacity's Transcription Tool. Slow Down or Speed Up audio.


"Dude, that's overkill". It certainly might be. If you don't want to provide a full transcript, you don't have to. Clear and interesting show notes and a catchy embedded description with the show will do a lot to help get the point across about each episode. Some cool examples of embedded show notes for episodes are the "Back 2 Work" and "CodeNewbie" podcasts, in that they include almost all of the details of the show and resource links. Some shows include timestamps along with their show note links ("Greater Than Code" and "Ruby Rogues" are both good examples of this).

Something I would also encourage, if you want to go the route of detailed show notes, is to develop the notes while the show is happening. That's hard if you are the only person recording the show or you are doing a one-on-one interview. It's easier if you have a panel of speakers. As the showrunner, I try my best to keep track of what people are talking about. If I hear a comment about a talk, a video, an article, or something else that I think might be helpful to reference, I jot down a quick note in my schedule sheet so I know generally where to look for it later.

GEEK TRICK: Here's my basic method for transcribing and writing show notes.

1. Create a header. In that header, make a list of everyone speaking on the show. Confirm name spelling and pronunciation, etc. This way, it's easier to know who you are listening to and how to tag each line of speech.

2. Create a macro to expand the initials of your regular contributors into their full names, and add new names as you go (see the sketch just after this list). For this, I put initials and a colon in place of each full name, such as "ML: " (yes, preserve the space ;) ). When I'm finished editing, I run the macro and it does a find-and-replace on all of the "ML: " tags, replacing them with "MICHAEL LARSEN: ". Same for all of the other names I've gathered. One run and done.

3. I use "Insert Endnote" option each time I come across something I want to provide as a show note reference/resource. This creates a running list of resources at the end of the document. If I have the link to the reference, I include it while I am in edit mode. If I don't or I'm offline at the time (often since I do a lot of the editing and transcribing while I'm sitting on a commuter train) I make the list with as much detail as I can, then fill in the link later after I've had a chance to look it up.

Every show should start with a paragraph of descriptive copy. It should be fun, interesting and, hopefully, engaging. As I stated in the first post of this series, sometimes I find this to be the most difficult part.

One final detail: I add metadata tags to the podcast. At this point in time, I keep it very basic. I list the name of the show, the title of the episode, the episode number, the year published, and the fact that it is a podcast. Also, to preserve the audio, I export the final podcast in Ogg Vorbis format and then convert it to MP3 using Max (which I like because it makes it simple to tag with metadata and to add cover art). From there, I upload it to the shared folder that we all use, alert the folks at QualiTest that we have an episode ready to publish, and they handle updating their website and posting to iTunes, Libsyn and their RSS feed.
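
If you'd rather script the tagging than click through Max, the same basic tags can be written with the third-party mutagen library. A minimal sketch, assuming an already-converted MP3; the file name and tag values are illustrative placeholders.

    from mutagen.easyid3 import EasyID3
    from mutagen.id3 import ID3NoHeaderError

    path = "the-testing-show-episode.mp3"  # hypothetical file name

    try:
        tags = EasyID3(path)
    except ID3NoHeaderError:
        tags = EasyID3()  # file had no ID3 tag yet; start a fresh one

    tags["album"] = "The Testing Show"    # name of the show
    tags["title"] = "Episode Title Here"  # title of this episode (illustrative)
    tags["tracknumber"] = "25"            # episode number (illustrative)
    tags["date"] = "2017"                 # year published
    tags["genre"] = "Podcast"             # designate it as a podcast
    tags.save(path)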

Next time, let's talk about ways to encourage people to download, listen to and share your podcast.

Friday, March 3, 2017

So You Want To Produce a Podcast? Part Four: Connecting the Train Cars

You've sat down, set up your system, made a Skype call or an in-person recording, and now you have recorded audio. Excellent! Now what?

Depending on what you plan to do with the show and how you did the recording, that answer can range from "absolutely nothing, I'm done" to "a beautifully orchestrated and conceptual program that flows from beginning to end." All right, that last part will definitely be subjective, but it points to a fact: the audio we have recorded is going to need some editing. There are many choices out there, ranging from simple WAV file editors all the way up to professional Digital Audio Workstations (DAWs). I'm going to suggest a middle ground; it's flexible, doesn't cost anything, and has a lot of useful tools already included. Welcome to Audacity.

Hey, wait... don't go. Granted, it's been around a long time, and I'll admit it's not the sexiest of tools you could be using, and it has limitations as a real-time DAW (which can be overcome with some system tweaking, but that's out of scope for this post). Still, as a multi-track waveform management tool, Audacity has a lot going for it, and once you get used to working with it, it's remarkably fast, or at least as fast as audio editing tools go.

CAVEAT: There are a lot of wild and crazy things you could do with audio editing. There is an effects toolbox in the software that would make any gearhead musician of the 90's envious, and many of the tools require some advanced knowledge of audio editing to be useful, but I'm not going to cover those this go around. What I will talk about are the tools that a new podcaster would want to master quickly and become comfortable with.

First things first. I am a fan of independent tracks, as many as you can effectively manage. As I mentioned in my first post, if possible, I would like to get local source recordings from everyone participating in the podcast. Skype Call Recorder lets you save the call as a .MOV file, and when imported into Audacity, it will appear as one stereo track. One side will be the local speaker, and the other side will be the other caller(s). Even if you can only get one recording, I recommend this approach, and doing the following:

1. Import the MOV file into Audacity, and confirm your stereo track does have the separation between local and remote callers.
2. Split the stereo track into two mono tracks.
3. Select Sync-Lock Tracks. This way, any edit you make that inserts or subtracts time from one track will be reflected in the other track.
4. Look for what should be silent spots. In between people talking, there should be a thin flat bar. If you see that flat bar, great; it means there are little to no artifacts. Unfortunately, what you are more likely to see are little bumps here and there. Fortunately, they are easy to clean up. Just highlight the area you wish to silence (you can also use the keyboard arrow keys to widen or narrow the selected area), and then press Command-L. Any audio that was in that region is now silenced. (A scripted sketch of this cleanup follows the list.)
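
If you ever want to script the mechanical parts of that checklist (say, for a batch of recordings), here's a minimal sketch of steps 1, 2 and 4 using the third-party pydub library, which needs ffmpeg available on your system. The file names and times are illustrative, and Audacity remains where I do the real editing.

    from pydub import AudioSegment

    # Step 1: import the recording (pydub hands the .MOV off to ffmpeg).
    call = AudioSegment.from_file("skype_call.mov")

    # Step 2: split the stereo track into two mono tracks (local and remote).
    local, remote = call.split_to_mono()

    def silence_span(track, start_ms, end_ms):
        """Step 4: replace a noisy span with silence, keeping the length
        identical so the two tracks stay in sync."""
        gap = AudioSegment.silent(duration=end_ms - start_ms, frame_rate=track.frame_rate)
        return track[:start_ms] + gap + track[end_ms:]

    local = silence_span(local, 12_500, 13_200)  # e.g. a stray cough on the local side

    local.export("local_clean.wav", format="wav")
    remote.export("remote_clean.wav", format="wav")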

By doing this, it is possible to clean up a lot of audio artifacts. Do make sure to look at them, though, and ensure that they are just random audio captures and not your guest stepping away from the microphone while still saying something important. Granted, that's usually handled at the time of recording, and as the producer, you need to be alert to it. If you receive a recording of a session you weren't part of, you don't have that option, and you really have to make sure you have listened to those in-between spaces.

Before we get too deep into the editing of the main podcast audio, I want to step back and talk about the "atmosphere" you set for your show. Most podcasts have little elements that help set the mood: intros and outros, messages, and items that will likely be mentioned in every episode. You may choose to do this differently each time, or create a standard set of "audio beds" that can be reused. For The Testing Show, I do exactly that. I have what I call an "Assembly Line" project. It contains my show's opener (theme music and opening words) as well as the show's closer (again, theme music and parting words). These sections, for most episodes, are exactly the same, so it makes sense to keep them together and synchronized. They could be mixed down into a single track, but that removes the ability to change the volume levels or make modifications. Unless I know something will always be used in exactly the same way every time, I prefer not to mix it down into a single track. It's easier to move a volume control or mute something one week than to have to recreate it.

GEEK TRICK: When you start getting multiple tracks on the same screen, it can be a pain to see what needs editing and adjusting and what's already where it should be. Each track view can be collapsed so that just a sliver of it is visible. For me, any time I collapse a track, that's a cue that I don't need to worry about that area, at least for now. It's where it needs to be, both timing-wise and sequence-wise. It also saves real estate, and frankly, you want as much visible real estate as possible when doing waveform editing.

A typical edit flow, showing tracks that are situated and ready versus what I am actively examining/editing.


In the first post in this series, I mentioned that I would silence audio first. Rather than delete sections outright, I highlight them and use Silence Audio. I do this because it lets me quickly do a rough shaping of the show, and then I can handle removing all of the silence in one step. To do this, select "Truncate Silence" from the Effect menu:
One of my favorite tools, it saves a lot of time.
The dialog box that appears lets you set an audio level; anything quieter than that level is treated as "silence". It also lets you set a duration, and only stretches of silence longer than that value will be truncated. In my experience, natural conversation allows anywhere from half a second to a full second for transitioning between speakers, so my default value is half a second (if an edit feels rushed, I can always generate silence to create extra space). The utility then takes any silent section longer than half a second and cuts it down, leaving you with a continuous stream of audio where the longest silence is half a second.
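
For the curious, the same idea can be roughed out in a script. Here's a minimal analogue of Truncate Silence using the third-party pydub library; the thresholds are illustrative and very much worth tuning by ear, and the file names are placeholders.

    from pydub import AudioSegment
    from pydub.silence import split_on_silence

    show = AudioSegment.from_wav("rough_cut.wav")

    chunks = split_on_silence(
        show,
        min_silence_len=500,  # only pauses longer than half a second count as "silence"
        silence_thresh=-45,   # anything quieter than -45 dBFS is treated as silence
        keep_silence=250,     # keep a quarter second on each side so speech can breathe
    )

    # Stitch the speech chunks back together; the longest remaining gap is
    # roughly half a second (a quarter second from each neighboring chunk).
    tightened = sum(chunks, AudioSegment.empty())
    tightened.export("tightened_cut.wav", format="wav")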



GEEK TRICK: This comes from music; specifically, it's about listening for the "musicality" of speech patterns. Everyone talks a little differently. Some are faster, some are slower. Some speak in quick bursts and then pause to reflect. Others are fairly steady and keep talking without noticeable breaks. Nevertheless, most people tend to stick to a pattern when they speak. Most people pause about 0.2 seconds where a comma would appear, 0.3 seconds for a period, and 0.5 seconds for a new paragraph (or to catch their breath). A friend of mine who used to work in radio production taught me the technique of the "breathless read-through", which isn't really breathless; rather, you silence the breaths while allowing for the time they would take. In short, speech, like music, needs "rest notes", and different values of rest notes are appropriate in different places. Try it out and see if it makes for a more natural sound.

No matter how well you try to edit between a speaker's thoughts, you run the risk of cutting them off mid-vocalization. Left as is, those cuts become noticeable clicks. They are distracting, so you want to smooth them out. Two utilities make that easy: Fade Out and Fade In. Simply highlight the very end or the very beginning of the waveform segment you want to fade (these are, in reality, very short selections), then apply the fade-out to the end of the previous word and the fade-in to the start of the following word. This takes a little practice to sound natural, and sometimes, no matter how hard you try, you will not be able to get a seamless transition, but most of the time it is effective.
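
As a scripted illustration of the same trick, here's a minimal sketch using the third-party pydub library; the cut point and fade lengths are illustrative placeholders.

    from pydub import AudioSegment

    track = AudioSegment.from_wav("speaker_track.wav")

    cut_point = 42_000                        # where the edit lands, in milliseconds
    before = track[:cut_point].fade_out(40)   # ~40 ms fade out of the previous word
    after = track[cut_point:].fade_in(40)     # ~40 ms fade into the following word

    (before + after).export("speaker_track_smoothed.wav", format="wav")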

After highlighting an area to silence, you can shorten the space to flow with the conversation.

Select the ending of a waveform segment, and then choose Fade Out from the Effect menu.

Same goes for fading into a new waveform, but choose Fade In for that.

This technique is often jokingly referred to as the "Pauper's Cross Fade".

GEEK TRICK: Use a running label track (or as many as you need) to remind yourself of things you have done that may warrant follow-up or additional processing. Using multiple comment tracks can also help you sync up sections later.

Sometimes you will have to amplify or quiet someone's recording. I have experimented with a number of approaches over the years, and I have decided that the Leveling effect, while helpful, messes with the source audio too much; the transitions between speakers become noticeably more "hissy". With separate tracks for each speaker, this isn't an issue: increasing or decreasing the track volume is sufficient. However, if your guests are all on the same track or channel, that's not an option. My preferred method in these cases is to use "Normalization", in which I set a peak threshold (usually +/- 3dB), select a section of a waveform, and apply the Normalization to it. That will either increase or decrease the volume of that section, but it will do so with a minimum of added noise. Again, this is one of those areas where your ears are your friend, so listen and get a feel for what you personally like to hear. Caveat: this will not work on clipped audio. Unlike analog recording, where running a little hot can make a warm sound on tape, in digital recording you have headroom, and then you clip. If you clip, you get distorted audio, and Normalization or lowering the volume will not help. In short, if you hear someone speaking loud and hot, and you suspect they may be clipping the recording, ask them to move back from the microphone and repeat what they said.
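
As a scripted illustration of normalizing just one section of a shared track, here's a minimal sketch using the normalize effect from the third-party pydub library; the section boundaries are illustrative, and the 3 dB of headroom mirrors the peak threshold mentioned above.

    from pydub import AudioSegment
    from pydub.effects import normalize

    track = AudioSegment.from_wav("panel_track.wav")

    start_ms, end_ms = 95_000, 118_000  # the quiet guest's answer (illustrative)
    # Bring that section's peak up to roughly -3 dBFS without touching the rest.
    boosted = normalize(track[start_ms:end_ms], headroom=3.0)

    fixed = track[:start_ms] + boosted + track[end_ms:]
    fixed.export("panel_track_leveled.wav", format="wav")

Note that, just as in Audacity, this does nothing useful for audio that was already clipped at the source.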

OK, so there it is. Not too big a set of tools to learn, is it? You will note that I have covered these areas as individual steps and as manual, active editing. Can you automate steps? You can, but I've found that only a few things make it worthwhile, and they need to be steps you would perform in sequence for a section or a whole file. In Audacity, these sequences are called "Chains", and you can create and edit them by selecting "Edit Chains" from the File menu. I have found that there are a lot of unpredictable elements in audio, so I encourage active listening rather than relying on the machine to process the audio directly. Once you get a handle on the things you know you will do a lot, and that you know will be effective with minimal chance of backfiring, go nuts!

Next time, I will talk about packaging your podcast, including tagging, formatting, art for episodes, show notes, transcripts and all the fun meta-data you may or may not want to keep track of with each episode.

Thursday, March 2, 2017

So You Want To Produce a Podcast? Part Three: A Space to Create

You have decided on a topic. You have decided who will participate in creating your podcast. By yourself or with others, the next step is the same: you need a space to create, record and produce. Such spaces take many forms, and mine is not the be-all and end-all. In fact, it's very much in a state of flux at the moment, but that's perfect. It means I can show you bare-bones techniques as well as hint at future enhancements.

Making the most of a loft bed's space.

Monitor, microphone, laptop, and always something to drink nearby.
Yes, that is Neville Longbottom's wand. Why do you ask ;)?

A motley assortment of pillows, blankets, towels, and a fold-out sleep pad.
Comfortable?
Yes.
Sound deadening?
Now you're catching on ;). 


For starters, here's a look at my "creative space". I have a small bedroom/office that we lovingly refer to as "the island of misfit toys room", in that it has an odd shape due to being over a stairwell. It's a small room, and space comes at a premium, so recently I invested in a simple loft bed with an under-the-bed desk. This particular version comes courtesy of Ikea (their Sväta Loft Bed and Desk). The bed is helpful as a crash point when I need to be up very late or very early and don't want to disturb the rest of the house. It also offers another interesting benefit: the pillows, blankets, and towels on top of it are actively used when I record a show. Curious? I was hoping you might be :).

If you own a MacBook Pro, you may be surprised to learn that it has a good-quality built-in microphone. You could record with it, but you'd need to be extra careful to isolate the laptop, and then invest in a separate wireless keyboard and mouse so as to not disturb it or make any sudden movements. For short work, such as recording intro and outro messages or very short spoken clips, this is doable. I do it on occasion when I'm away from my home setup and I don't want to drag everything with me. At times like that I'll also use the microphone and voice recorder app on my iPhone, but that's way down on my list of preferences. To get the best sound, I prefer a dedicated microphone. More to the point, I prefer a microphone that I can move or reposition as I see fit. To that end, I use the combination of gear pictured below: a Blue Snowball microphone in a shock mount with a pop filter, all mounted on a swing arm.

Depending on the room that you are in, you may have a lot of natural reverb, or you may have carpet and drapes that help to muffle ambient sounds. I like to go a couple of steps further: I take several pillows and a fold-out sleeping pad to make a much more "acoustically dead" space for when I speak. I also take a large towel and put it over my hands when I use the keyboard and mouse. It doesn't muffle all sound, but it quiets those movements down a lot. Remember, the fewer artifacts you record, the less tinkering you will have to do later with silencing or trying to even out the recording.

A few strategically placed pillows, a sleeping pad and a towel, and we are off to the races!


Another very important piece of equipment to have, at least for me, is a good full-ear set of studio monitors (or you can be normal and just call them headphones ;) ). I've been using the same pair for many years (Audio-Technica ATH-M40fs Studiophones), and I hope they never break. They cup the ear and are great at preventing sound leakage. That's important when you get up close to a microphone, as you don't want other people's conversations invading your recording. I also like this particular model because it has a long cord, so if I need to get up to move something, I don't have to take them off mid-recording.

Blue Snowball ice, Ringer Shock Mount, Audio-Technica Studiophones and a pop filter, all mounted to a Rode Swing Arm.


GEEK TRICK: There are two enhancements on the horizon for me. First, while the pillows and pad are effective and cheap options, they take up a fair amount of space and are not very customizable. To that end, a project I have in the works is to make a set of hanging "gobos", pieces of sound-deadening material that I can position exactly where I need them to provide optimum sound isolation. The second is to get something larger than my iPhone that I can use when doing on-air fact checks or reference checks. The keyboard works, as does the mouse, but again, they're noisy. What is much less so? An iPad or Android tablet. Plenty of screen room to see what I need when directing questions or comments, but very little noise for the microphone to pick up.

So there you have it. If you have ever wondered what my recording space looks like, at the moment, this is it. I hope to show some improvements and enhancements soon, but for a new podcaster, this setup is effective, inexpensive, and takes up little space. Plus, you get to sleep on it later :). Three cheers for multitasking!

Do you like the approach? Do you think it's silly? Can you suggest improvements? If so, I'd love to hear from you. Please feel free to leave comments below. Next time, we'll talk about what you do after you've recorded your masterpiece.