Showing posts with label The Testing Show. Show all posts

Wednesday, December 8, 2021

The Accessibility Mindset: The Latest Episode of The Testing Show is Up

 Another episode of The Testing Show is now live. As it looks like the embed function works I am going to keep on with this methodology :).



This episode is focusing on "The Accessibility Mindset?" and yes, the question mark is intentional, as in this episode we are specifically asking what it is and how to go about using it. 

As I mentioned in the intro to the show, I think too often Accessibility is limited to making sure a screen reader can navigate a site and to make sure alt tags for images are in place. I think we should be ashamed of ourselves if that is the extent of our involvement, focus, and interest. This is why I was interested in hearing what Aditya Bangari and Riya Sharma had to say about approaching projects not just with Accessibility as a goal in mind, but to do so with a literal "Accessibility Mindset". Along with Accessibility is the complementary ideal of Inclusive Design, where we work towards making sites and services more usable and effective for everyone.

If I have piqued your interest, click play and have a listen. If you like what you hear, please feel free to leave a comment about the show below. If you don't like what you hear, then definitely leave a comment ;). If you really like what you hear, consider subscribing via Apple Podcasts, Google Podcasts, or Spotify Podcasts.



 

Wednesday, November 10, 2021

Is It Testable? A New Episode of The Testing Show is Now Available

 

Image placard for The Testing Show: Episode 107: Is This Testable?

It's that time again. Another episode of The Testing Show is now available. Last week we posted "Is This Testable?" This was a fun interview that Matt and I had with Gil Zilberfeld and, as an added bonus, I was able to add some input as a co-guest this go around (as Testability is one of my talking points :) ).


I'm trying an experiment, so I'm hoping this works. I'm embedding the show in these posts and seeing if they appear when I post them. If not, here's the link to the episode page.

As a follow-up, it was interesting to hear where Gil and I shared ideas and where we differed. Well, saying we "differed" isn't quite correct. More accurately, it was interesting to see what we each tended to prioritize and where we focused our respective attention :).

Anyway, the show can tell you lots more than I can/should, so go have a listen, or a read if a transcript is more up your alley ;).

Friday, October 15, 2021

Listen to the QA Summit 2021 After Party (The Testing Show, Episode 106)

 The latest episode of The Testing Show dropped early this morning. 

The Testing Show Episode 106 Graphic: People at a Conference

I realized today that I hadn't shared some interesting news about the distribution of the podcast lately. We've expanded our reach.

In addition to having the show on Apple Podcasts, you can now listen on Google Podcasts and on Spotify.

The Testing Show on Apple Podcasts

The Testing Show on Google Podcasts 

The Testing Show on Spotify

This show was recorded at the XPansion QASummit, an event I was added to late in the process as a speaker. Since I was speaking, and saw that a number of the other participants were people I knew and had worked with on the podcast, I decided to pack my microphone along and record two episodes of the podcast live at the event. 

Our previous episode, which was billed as the "Pre-Game Show", featured myself and Matt Heusser interviewing our friends Gwen Iarussi and Rachel Kibler about their talks, expectations, and areas of interest.

This latest episode is about our experiences just after the conference ended. Since this was recorded live, there is a fair amount of background noise but that gives you a bit of a feel of the event itself. 

Both Gwen and Rachel joined us for the after-party episode and we had the pleasure of meeting Pax Noyes, who joined us to talk about the conference and initiatives she is active with, notably "QA at the Point".

Please go have a listen and let us know what you think. Drop a comment and let us know what you'd like to hear us talk about next.

Friday, August 17, 2018

The Testing Show - Conferences and Conferring with Anna Royzman, Claire Moss and Mike Lyles

I have been terrible with my shameless self-promotion as of late, but I think it's time I switch that up a bit. Remember my comment about that "grind" earlier in the week? Here's the end result and yes, I always feel better about it after it's all said and done because I enjoy seeing the end product and most importantly sharing it with everyone out there.



I have a little favor to ask as well if you are so inclined. Do you enjoy listening to The Testing Show? If you do, could I ask you to go to Apple Podcasts and write us a review? Reviews help people find the podcast and make it more likely to appear in the feed listings. Seriously, we all would really appreciate it. We work hard to bring these out and I'd love nothing better than to have more people be able to find it.



In any event, we hope you enjoy this month's episode. To set things up, here's the lead in for the show:

August is a busy time of year for software testing conferences (not to mention conferences in other industries). This month, we decided that, with everyone heading off to conferences hither and yon, we would dedicate a show to the topic, and we have done exactly that. Anna Royzman (Test Masters Academy), Claire Moss (DevOpsDays) and Mike Lyles (Software Test Professionals) join us as guests in their capacity as conference organizers, speakers and attendees (not necessarily in that order) to riff on Conferences and Conferring with Matthew Heusser, Michael Larsen, and Perez Ababa. Want to know where to go, what format to take part in or if you want to try your hand at speaking/presenting? We’ve got something for all those bases!

The Testing Show - Conferences and Conferring with Anna Royzman, Claire Moss and Mike Lyles: The Testing Show team discusses the QA conference season with Anna Royzman, Claire Moss and Mike Lyles. Tune in to learn more!


Thursday, January 4, 2018

The Testing Show: CodeNewbie With Saron Yitbarek, Part 1

Happy New Year everyone!

I'd like to present the newly retooled The Testing Show. New theme music (courtesy of my band Ensign Red) and what I hope will be a new streamlined format for the show. I've learned a thing or two about doing audio the past few years and I'm hoping to see us transition a little bit to a broader storytelling approach along with the regular interviews that we do.

To that end, that special guest I was talking about a few weeks back is Saron Yitbarek, the mastermind behind the CodeNewbie website, podcast, and Twitter chats, and more recently the producer of the BaseCS podcast, as well as the organizer of the Codeland development conference.

I joke during the intro in this show that I feel like a bit of a fanboy here, but seriously, I have wanted to interview Saron for a long time. I was a little nervous asking if she'd be on our show with her level of visibility, so I was overjoyed when she said "yes" and even more so at the natural conversation that we had. She's not just a great interviewee, she's an excellent interviewer as well, so there was a really fun give and take on this show. To that end, I likewise decided that my traditional heavy grammatical editing style wasn't suited for this conversation. Some of the audio may sound a little less slick by TESTHEAD standards, but I feel it adds to the immediacy and excitement of the conversation. I'm not kidding when I say I was a bit giddy at a few spots in this episode.

All right, fanboy gushing aside, this episode covers what I think is interesting ground. Saron is perhaps one of the few guests who has never identified as a software tester, but she totally gets testing. What's more, she totally gets the frustration of getting up the courage to commit to learning how to write code (and yes, it takes courage to do it). It takes courage to be continuously frustrated. She also shares a lot of her ups and downs and frustrations that she has had during her own journey, and how she uses that as fuel to help support others on their coding journeys.

We recorded for almost two hours, and it has been a struggle to decide what to keep the focus on for these interviews. I'm hoping I've captured the best of the conversation, but I'll leave that to you all to decide.

If you enjoy listening to The Testing Show, I'd like to ask you a favor. Please go to Apple Podcasts and give us a rating. If you feel we deserve five stars, please give it to us :). If you feel we deserve less, that's fine too, but please leave a review and tell us why you feel that way. Give us a review as to why you think we deserve five stars while you are at it :). We aim to make The Testing Show the best podcast we can and if you have thoughts about how we can make it better, as the producer, I'm definitely interested.



The Testing Show: CodeNewbie With Saron Yitbarek, Part 1: The Testing Show talks about the process of learning how to code, so we talk with Saron Yitbarek about where and how to start. Tune in to learn more!

Wednesday, December 20, 2017

The Testing Show: Hiring and Getting Hired

It's been a big year for The Testing Show and this is the last episode of the year that is 2017. We were happy to have Gwen Dobson join Jessica Ingrassellino, Matt Heusser and me to talk about the changes that have taken place in the testing market over the past few years.

We riffed on a number of topics including the laws that prohibit asking about salary histories, having that discussion about money and making the best case for your worth, marketing your skill set and leveraging the variety of platforms at our disposal to help sell ourselves and our personal brands.

It's been a great deal of fun to produce and participate in this podcast and I'm looking forward to the new topics and guests we will have in 2018. I am actively working on a two-parter for January with a special guest whose identity you'll just have to wait to see/hear, but I can say I've wanted to interview this person for a long time and I'm excited about presenting these episodes, along with some other changes for the show in 2018.

With that, please jump in and have a listen to
The Testing Show: Episode 50: Hiring and Getting Hired:

Wednesday, October 11, 2017

Machine Learning Part 2 With Peter Varhol: The Testing Show

As has become abundantly clear to me over the last several weeks, I could be a lot more prolific with my blog posts if I were just a little bit better and more consistent with self-promotion. Truth be told, a lot of time goes into editing The Testing Show. I volunteered a long time ago to do the heavy lifting for the show editing because of my background in audio editing and audio production from a couple decades back. Hey, why let those chops go to waste ;)? Well, it means I don’t publish as often since, by the time I’ve finished editing a podcast, I have precious little time or energy to blog. That is unless I blog about the podcast itself… hey, why not?


So this most recent episode of The Testing Show is “Machine Learning, Part 2” and features Peter Varhol. Peter has had an extensive career and has also done a prodigious amount of writing. In addition, he has a strong mathematical background which makes him an ideal person to talk about the proliferation of AI and Machine Learning. Peter has a broad and generous take on the current challenges and opportunities that both AI and Machine Learning provide. He gives an upbeat but realistic view of what the technologies can and cannot do, as well as ways in which the tester can both leverage and thrive in this environment.




Anyway, I’d love for you to listen to the show, so please either go to the Qualitest Group podcast page or subscribe via Apple Podcasts. While you’re at it, we’d love it if you could leave us a review, as reviews help bubble our group higher in the search listings and help people find the show. Regardless, I’d love to know what you think and comments via this page are also fine.

Wednesday, March 15, 2017

So You Want To Produce a Podcast? Part Five: Write Down all the Things

I'm guessing some people have noticed it's been a while since my previous post. You may be asking what I've been doing between posts. If you guessed editing a podcast for a deadline, you'd be correct. If you also guessed writing up a transcript and show notes for the same show, you'd be correct again.

Wait, both at the same time? Why on Earth would you do something like that? I'm happy to tell you.

There's a lot of information you can include with podcast episodes. Some publish the title of the show, date of the show and a brief description of the contents. Others provide a lot of detail about the episode, including a list of resources and references relevant to the episode. If you want to be comprehensive, you provide a full transcript for the show.

Let's take a step back... why provide a full transcript? As an advocate of Accessibility, I believe in making the show usable by as many people as possible. For most normative users, listening to the show is sufficient, but what if you can't listen to the show? Closed captioning for podcasts is a limited technology and typically used with videocasts as opposed to audio-only podcasts. Therefore, I provide a full transcript for each "The Testing Show" episode.

Transcribing a podcast is slow, time-consuming work. What about speech recognition software? Yes, I've tried quite a few. In most cases, I get a partial response mixed with a lot of stalling and correcting large areas of text. I've experimented with using Soundflower to direct WAV audio to a text file. When it's just my voice, speaking slowly, I get a good hit rate of spoken words to transcribed text. The more speakers on a recording, the lower that hit rate. Between the time I spend editing and the time I spend fixing errors in the transcript, there are no real time savings. Therefore, I kick it old school and manually transcribe the shows.

"Dude, you can totally farm that work out to other people". I've done exactly that on more than a few occasions. When I am far ahead of the deadline and I feel the conversation is clear and concise, I am willing to have other people do the transcription (read: pay for it). For that to be effective, I need to complete audio editing at least a week before the deadline. Sometimes, that's easy to do. Other times, not so much. Real life finds ways to take away from podcast production time, especially since I don't do this full-time. If I can't guarantee a long enough lead time to have a service do the transcription, I do it myself. If you do decide to have a service do your transcription, I give high marks to "The Daily Transcriber".

Waveform editor on the left of me.
Text editor on the right.
Here I am, stuck in the middle with you ;).


Along with a transcript, I also provide what I refer to as a "grammatical audio edit" for each show. What's a grammatical audio edit? It's where I go through each statement from each speaker and remove elements that would not flow well in a written paragraph. That includes verbal tics (those "um", "ah", "like", "you know"), repeated sequences, tangents, semantic bleaching, etc. Realize, I cannot magically fix the way people speak. At a certain point, I have to let them say what they will say in their style. Any transcript will, of course, reflect this. I do a word-for-word scrubbing of the recorded audio. Since I'm editing by the second, transcribing as I edit is a reasonable approach. I listen to a section of dialogue, edit and sequence the conversation with a reasonable cadence, and while I'm doing that, I type out the recorded words (or use Apple's "dictation" option, which can be activated with the "fn fn" sequence).
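For the curious, the transcript half of that scrub can be approximated in code. This isn't part of my actual workflow (I do it by ear and by hand), but a quick Python sketch shows the idea; the filler list and the sample sentence are made up for illustration:

```python
import re

# Filler words to strip from a raw transcript line. The list is illustrative;
# tune it to your speakers' actual verbal tics. Note this is naive: it will
# also strip "like" when it's used as a real verb.
FILLERS = ["you know", "like", "um", "uh", "ah"]

def scrub_line(line: str) -> str:
    """Remove standalone filler words (and a trailing comma, if any),
    then collapse the leftover whitespace."""
    pattern = r"\b(?:" + "|".join(re.escape(f) for f in FILLERS) + r")\b,?"
    cleaned = re.sub(pattern, "", line, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(scrub_line("Um, so, you know, the build was, like, totally broken"))
# → "so, the build was, totally broken"
```

A real grammatical edit is a judgment call on every sentence, which is exactly why a script like this can only ever be a first pass.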

To this end, it's important that you have already done a rough edit of the podcast. You should know which sections you are going to keep and which ones you are going to "leave on the cutting room floor" and have silenced out those sections, and then run "Truncate Silence" to squeeze everything together. This way, you know the sections you are editing and transcribing will be in the finished podcast. You can always add a section back later if you change your mind, but removing a section you've already done a full edit and transcription for is frustrating. Minimize this if you can.

GEEK TRICK: If you use Audacity, you can use the Transcription tool. It slows down or speeds up the audio to a level you determine, and has its own playback button that plays the audio at the designated speed. It also lowers or raises the pitch of the audio, which can be an annoyance. Still, for making sense of a fast passage, or for listening at the pace you type, this feature is helpful. In fast playback mode, it's also handy for checking levels between speakers.

Audacity's Transcription Tool. Slow Down or Speed Up audio.


"Dude, that's overkill". It certainly might be. If you don't want to provide a full transcript, you don't have to. Clear and interesting show notes and a catchy embedded description with the show will do a lot to help get the point across about each episode. Some cool examples of embedded show notes for episodes are the "Back 2 Work" and "CodeNewbie" podcasts, in that they include almost all of the details of the show and resource links. Some shows include timestamps along with their show note links ("Greater Than Code" and "Ruby Rogues" are both good examples of this).

Something I would also encourage, if you want to go the route of detailed show notes, is to develop the notes while the show is happening. That's hard if you are the only person recording the show or you are doing a one-on-one interview. It's easier if you have a panel of speakers. As the show runner, I try my best to keep track of what people are talking about. If I hear a comment about a talk, a video, an article, or something that I think might be helpful to reference, I jot down a quick note in my schedule sheet so I know generally where to look for it and reference it.

GEEK TRICK: Here's my basic method for transcribing and writing show notes.

1. Create a header. In that header, make a list of everyone speaking on the show. Confirm name spelling and pronunciation, etc. This way, it's easier to know who you are listening to and how to tag each line of speech.

2. Create a Macro to expand the names of your regular contributors, and add new names as you go. For this, I put an initial and a colon for each full name, such as "ML: " (yes, preserve the space ;) ). When I'm finished editing, I run the macro and it does a find and replace for all of the "ML: " tags, replacing them with "MICHAEL LARSEN: ". Same for all of the other names I've gathered. One run and done.

3. I use the "Insert Endnote" option each time I come across something I want to provide as a show note reference/resource. This creates a running list of resources at the end of the document. If I have the link to the reference, I include it while I am in edit mode. If I don't, or I'm offline at the time (often, since I do a lot of the editing and transcribing while sitting on a commuter train), I make the list with as much detail as I can, then fill in the link later after I've had a chance to look it up.
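If you'd rather not build the macro in your word processor, the same find-and-replace idea from step 2 is only a few lines of Python. The tags and names below are illustrative stand-ins, not a complete roster:

```python
# Short tags map to full names for a "one run and done" expansion.
# The entries here are examples; add your own regular contributors.
SPEAKERS = {
    "ML: ": "MICHAEL LARSEN: ",
    "MH: ": "MATTHEW HEUSSER: ",
}

def expand_speaker_tags(transcript: str) -> str:
    """Replace each short tag with the speaker's full name (space preserved)."""
    for short, full in SPEAKERS.items():
        transcript = transcript.replace(short, full)
    return transcript

raw = "ML: Welcome back to the show.\nMH: Glad to be here."
print(expand_speaker_tags(raw))
```

Same caveat as any blind find-and-replace: keep the colon and trailing space in the tag so a stray "ML" inside a word doesn't get expanded.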

Every show should start with a descriptive paragraph of copy. It should be fun, interesting, and hopefully engaging. As I stated in the first post of this series, sometimes I find this to be the most difficult part.

Some final details: I add metadata tags to the podcast. At this point, I keep it very basic. I list the name of the show, the title of the episode, the episode number, the year published, and the "podcast" genre designation. Also, to preserve the audio, I export the final podcast in Ogg Vorbis format and then convert it to MP3 using Max (which I like because it makes it simple to tag with metadata and to add cover art). From there, I upload it to the shared folder that we all use, alert the folks at QualiTest that we have an episode ready to publish, and they handle updating their website and posting to iTunes, Libsyn, and their RSS feed.

Next time, let's talk about ways to encourage people to download, listen to and share your podcast.

Friday, March 3, 2017

So You Want To Produce a Podcast? Part Four: Connecting the Train Cars

You've sat down, set up your system, made a Skype call or an in-person recording, and now you have recorded audio. Excellent! Now what?

Depending on what you plan to do with the show and how you did the recording, that answer can range from "absolutely nothing, I'm done" to "a beautifully orchestrated and conceptual program that flows from beginning to end." All right, that last part will definitely be subjective, but it points to a fact. The audio we have recorded is going to need some editing. There are many choices out there, ranging from simple WAV file editors all the way up to professional Digital Audio Workstations (DAW). I'm going to suggest a middle ground; it's flexible, doesn't cost anything, and has a lot of useful tools already included. Welcome to Audacity.

Hey, wait... don't go. Granted, it's been around a long time, and I'll admit it's not the sexiest of tools you could be using, and it has limitations as a real-time DAW (which can be overcome with some system tweaking, but that's out of scope for this post). Still, as a multi-track waveform management tool, Audacity has a lot going for it, and once you get used to working with it, it's remarkably fast, or at least, fast as audio editing tools go.

CAVEAT: There are a lot of wild and crazy things you could do with audio editing. There is an effects toolbox in the software that would make any gearhead musician of the 90's envious, and many of the tools require some advanced knowledge of audio editing to be useful, but I'm not going to cover those this go around. What I will talk about are the tools that a new podcaster would want to master quickly and become comfortable with.

First things first. I am a fan of independent tracks, as many as you can effectively manage. As I mentioned in my first post, if possible, I would like to get local source recordings from everyone participating in the podcast. Skype Call Recorder lets you save the call as a .MOV file, and when imported into Audacity, it will appear as one stereo track. One side will be the local speaker, and the other side will be the other caller(s). Even if you can only get one recording, I recommend this approach, and doing the following:

1. Import the MOV file into Audacity, and confirm your stereo track does have the separation between local and remote callers.
2. Split the stereo track into two mono tracks.
3. Select Sync-Lock tracks. This way, any edit you make that inserts or subtracts time from the one track will be reflected in the other track.
4. Look for what should be silent spots. In between people talking, there should be a thin flat bar. If you have that flat thin bar, great, it means there are little to no artifacts. Unfortunately, what you are more likely to see are little bumps here and there. Fortunately, they are easy to clean up. Just highlight the area you wish to silence (you can also use the keyboard arrow keys to widen or narrow the selected area), and then press Command-L. Any audio that was in that region is now silenced.

By doing this, it is possible to clean up a lot of audio artifacts. Do make sure to look at them, though, and ensure that they are just random audio captures and not your guest stepping away from the microphone while still saying something important. Granted, that's usually handled at the time of recording, and as the producer, you need to be alert to it. If you receive a recording of a session you weren't at, you don't have that option, and you really have to make sure you have listened to those in-between spaces.
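If you're curious what "silence a selection" actually does to the data, here's a toy Python sketch. It treats a mono track as a plain list of float samples, which is a big simplification of what Audacity does under the hood, but the effect is the same: the samples in the selected region become zeros.

```python
SAMPLE_RATE = 44_100  # CD-quality rate; adjust to match your recording

def silence_region(track, start_sec, end_sec, rate=SAMPLE_RATE):
    """Zero out the samples between start_sec and end_sec -- the same
    effect as selecting a region in Audacity and pressing Command-L."""
    start = int(start_sec * rate)
    end = int(end_sec * rate)
    return track[:start] + [0.0] * (end - start) + track[end:]

# A half-second of fake audio with a stray click at the quarter-second mark.
track = [0.0] * (SAMPLE_RATE // 2)
track[SAMPLE_RATE // 4] = 0.8
cleaned = silence_region(track, 0.2, 0.3)
print(max(abs(s) for s in cleaned))  # the click is gone: 0.0
```

Note the length of the track doesn't change; silencing replaces audio, it doesn't remove time. Removing the time comes later, with Truncate Silence.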

Before we get too deep into the editing of the main podcast audio, I want to step back and talk about the "atmosphere" you set for your show. Most podcasts have little elements that help set the mood for the show, in the form of intros and outros, messages, and what will likely be frequently mentioned items to each podcast. You may choose to do this differently each time, or create a standard set of "audio beds" that can be reused. For the Testing Show, I do exactly that. I have what I call an "Assembly Line" project. It contains my show's opener (theme music and opening words) as well as the show's closer (again, theme music and parting words). These sections, for most episodes, are exactly the same. Therefore, it makes sense to have them together and synchronized. It's possible that these could be mixed down into a single track, but that removes the ability to change the volume levels or make modifications. Unless I know something will always be used in the same way every time, I prefer not mixing them down into a single track. It's easier to move a volume control or mute something one week than have to recreate it.

GEEK TRICK: When you start getting multiple tracks on the same screen, it can be a pain to see what's in need of editing and adjusting, and what's already where it should be. Each track view can be collapsed so that just a sliver of the track view is visible. For me, anytime I collapse a track, that's a key that I don't need to worry about that area, at least for now. It's where it needs to be, both timing-wise and sequence-wise. It saves real estate, and frankly, you want as much visible real estate as possible when doing waveform editing.

A typical edit flow, showing tracks that are situated and ready versus what I am actively examining/editing.


In the first post in this series, I mentioned that I would silence audio first. Rather than delete sections outright, I'd highlight them and Silence Audio. I do this because it lets me do a rough shaping of the show quickly, and then I can handle removing all of the silence in one step. To do this, select "Truncate Silence" from the Effect Menu:
One of my favorite tools, it saves a lot of time.
The dialog box that appears gives you the option to set the audio level below which Audacity considers a passage to be "silence". It also lets you set a minimum duration, so that only silences longer than the value entered get truncated. In my experience, natural conversation flow allows anywhere from half a second to a second for transitioning between speakers, so my default value is half a second (if it feels rushed, I can always generate silence to create extra space). The utility then takes any silent section longer than half a second and cuts it down. That will leave you with a continuous stream of audio where the longest silence is half a second.
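For those who like to see the mechanics, here's a toy Python version of the same idea. The threshold and gap values mirror my defaults above, but the sample data and the tiny sample rate are made up so the numbers stay readable:

```python
def truncate_silence(samples, rate, floor=0.01, max_gap_sec=0.5):
    """Collapse any run of near-silent samples longer than max_gap_sec
    down to exactly max_gap_sec, mimicking Audacity's Truncate Silence."""
    max_gap = int(max_gap_sec * rate)
    out, run = [], []
    for s in samples:
        if abs(s) < floor:
            run.append(s)          # accumulate the current silent stretch
        else:
            out.extend(run[:max_gap])  # keep at most half a second of it
            run = []
            out.append(s)
    out.extend(run[:max_gap])      # handle trailing silence too
    return out

rate = 10  # toy sample rate, 10 samples per second
audio = [0.5] + [0.0] * 20 + [0.5]  # a two-second gap between two "words"
print(len(truncate_silence(audio, rate)))  # gap trimmed to 5 samples: 7 total
```

This is a sketch, not Audacity's implementation; the real effect has more options (independent detection and truncation thresholds, compression ratios), but the collapse-long-gaps idea is the same.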



GEEK TRICK: This comes from music, and specifically, it's looking for the "musicality" of speech patterns. Everyone talks a little differently. Some are faster, some are slower. Some speak in quick bursts and then pause to reflect. Others will be fairly steady but keep talking without noticeable breaks. Nevertheless, most people tend to stick to a pattern when they speak. Most people generally pause about 0.2 seconds where a comma would appear, 0.3 seconds for a period, and 0.5 seconds for a new paragraph (or to catch their breath). A friend of mine who used to work in radio production taught me this technique of the "breathless read through", which isn't really breathless, but rather silencing the breaths while allowing for the time the breath would take. In short, speech, like music, needs "rest notes", and different values of rest notes are appropriate. Try it out and see if it makes for a more natural sound.
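Those rest-note values are easy to encode if you ever script your own silence generation. The table below is just my rule of thumb from the paragraph above, not anything Audacity provides:

```python
# "Rest note" lengths in seconds: comma, period, and new paragraph.
PAUSE_SECONDS = {",": 0.2, ".": 0.3, "\n\n": 0.5}

def silence_samples(boundary: str, rate: int = 44_100) -> int:
    """Number of zero-valued samples to generate for a speech boundary."""
    seconds = PAUSE_SECONDS.get(boundary, 0.2)  # default to a comma's pause
    return round(seconds * rate)  # round, not int, to dodge float truncation

print(silence_samples("."))  # 0.3 s at 44.1 kHz → 13230 samples
```

The point isn't the exact numbers, which vary speaker to speaker, but that keeping the values consistent within one speaker's edit is what makes the result sound natural.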

No matter how well you try to edit between a speaker's thoughts, you run the risk of cutting them off mid-vocalization. Left as is, these cuts are noticeable clicks. They are distracting, so you want to smooth them out. Two utilities make that easy: Fade Out and Fade In. Simply highlight the end or beginning of the waveform, making sure to highlight right to the end or start of the section you want to fade (these are in reality very short segments), then apply the fade-out to the end of the previous word and the fade-in to the start of the following word. This will take a little practice to sound natural, and sometimes, no matter how hard you try, you will not be able to get a seamless transition, but most of the time it is effective.

After highlighting an area to silence, you can shorten the space to flow with the conversation.

Select the ending of a waveform segment, and then choose Fade Out from the Effect menu.

Same goes for fading into a new waveform, but choose Fade In for that.

This technique is often jokingly referred to as the "Pauper's Cross Fade".
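In sample terms, the Pauper's Cross Fade is just two linear ramps that meet at zero. Here's a rough Python sketch of that shape (not how Audacity implements its fades, just the idea):

```python
def fade_out(samples):
    """Linear ramp from full volume down to zero across the selection."""
    n = len(samples)  # selections of one sample would divide by zero;
    return [s * (n - 1 - i) / (n - 1) for i, s in enumerate(samples)]

def fade_in(samples):
    """Linear ramp from zero up to full volume across the selection."""
    n = len(samples)
    return [s * i / (n - 1) for i, s in enumerate(samples)]

# Fade the tail of one word and the head of the next, then butt them together.
tail = fade_out([0.5] * 5)   # [0.5, 0.375, 0.25, 0.125, 0.0]
head = fade_in([0.5] * 5)    # [0.0, 0.125, 0.25, 0.375, 0.5]
print(tail[-1], head[0])     # the two ends meet at silence: 0.0 0.0
```

Because both edges pass through zero, there is no step discontinuity where the two segments join, and a step discontinuity is exactly what your ear hears as a click.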

GEEK TRICK: Use a running label track, or as many as you need, to remind yourself of things you have done that may warrant follow-up or additional processing. Using multiple comment tracks can also help you sync up sections later.

Sometimes you will have to amplify or quiet someone's recording. I have experimented with a number of approaches over the years, and I have decided that using the Leveling effect, while helpful, messes with the source audio too much. The transitions between speakers become noticeably more "hissy". With separate tracks for each speaker, this isn't an issue; increasing or decreasing the track volume is sufficient. However, if your guests are all on the same track or channel, that's not an option. My preferred method in these cases is to use "Normalization", in which I set a peak threshold (usually +/- 3dB), select a section of a waveform, and apply the Normalization to it. That will either increase or decrease the volume of that section, but it will do so with a minimum of added noise. Again, this is one of those areas where your ears are your friend, so listen and get a feel for what you personally like to hear. Caveat: this will not work on clipped audio. Unlike analog recording, where running a little hot can make a warm sound on tape, in digital recording you have headroom, and then you clip. If you clip, you will get distorted audio. Normalization or lowering the volume will not help. In short, if you hear someone speaking loud and hot, and you suspect they may be clipping the recording, ask them to move back from the microphone and repeat what they said.
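Peak normalization itself is simple math: find the loudest sample, compute the gain that would put it at the target level, and multiply everything by that gain. A hedged Python sketch of the idea (Audacity's own implementation differs in the details, but the arithmetic is this):

```python
def normalize(samples, target_db=-3.0):
    """Scale a section so its peak lands at target_db (dBFS).
    Pure gain, so no added noise; clipped audio stays distorted."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples  # all silence; nothing to scale
    target_peak = 10 ** (target_db / 20)  # -3 dBFS is roughly 0.708
    gain = target_peak / peak
    return [s * gain for s in samples]

quiet = [0.1, -0.2, 0.15]
print(round(max(abs(s) for s in normalize(quiet)), 3))  # → 0.708
```

You can also see from the math why normalization can't rescue clipped audio: every sample is multiplied by the same gain, so a flattened-off peak stays flattened, just at a different volume.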

OK, so there it is. Not too big a set of tools to learn, is it? You will note that I have covered these areas as individual steps, as manual, active editing. Can you automate steps? You can, but I've found that only a few things make it worthwhile, and they need to be steps you would perform in sequence on a section or a whole file. In Audacity, these sequences are called "Chains" and you can create and edit them by selecting "Edit Chains" from the File menu. I have found that there are a lot of unpredictables with audio, so I encourage active listening rather than relying on the machine to process the audio directly. Once you get a handle on the things you know you will do a lot, and that you know will be effective with minimal chance of backfiring, go nuts!

Next time, I will talk about packaging your podcast, including tagging, formatting, art for episodes, show notes, transcripts and all the fun meta-data you may or may not want to keep track of with each episode.

Thursday, March 2, 2017

So You Want To Produce a Podcast? Part Three: A Space to Create

You have decided on a topic. You have decided on who will participate in creating your podcast. By yourself or with others, the next step is the same. You need a space to create, record and produce. They take many forms, and mine is not the be all and end all. In fact, it's very much in a state of flux at the moment, but that's perfect. It means I can show you bare bones techniques as well as hint at future enhancements.

Making the most of a loft bed's space.

Monitor, microphone, laptop, and always something to drink nearby.
Yes, that is Neville Longbottom's wand. Why do you ask ;)?

A motley assortment of pillows, blankets, towels, and a fold-out sleep pad.
Comfortable?
Yes.
Sound deadening?
Now you're catching on ;). 


For starters, here's a look at my "creative space". I have a small bedroom/office that we lovingly refer to as "the island of misfit toys room", in that it has an odd shape due to being over a stairwell. It's a small room, and space comes at a premium, so recently I invested in a simple loft bed with an under-the-bed desk, courtesy of Ikea (their Sväta Loft Bed and Desk). The bed is a handy crash point for late nights or early mornings when I don't want to disturb the rest of the house. It also offers another interesting benefit: the pillows, blankets, and towels on top of it are actively used when I record a show. Curious? I was hoping you might be :).

If you own a MacBook Pro, you may be surprised to learn that it has a good-quality built-in microphone. You could record with it, but you'd need to be extra careful to isolate the laptop, and then invest in a separate wireless keyboard and mouse so as not to disturb it or make any sudden movements. For short work, such as recording intro and outro messages or very short spoken clips, this is doable. I do it on occasion when I'm away from my home setup and don't want to drag everything with me. At times like that I will also use the microphone and voice recorder app on my iPhone, but that's low on my list of preferences. To get the best sound, I prefer a dedicated microphone. More to the point, I prefer a microphone that I can move or reposition as I see fit. To that end, I use the combination of equipment pictured below.

Depending on the room that you are in, you may have a lot of natural reverb, or you may have carpet and drapes that help muffle ambient sounds. I like to go a couple of steps further: I use several pillows and a fold-out sleeping pad to make a much more "acoustically dead" space for when I speak. I also drape a large towel over my hands when I use the keyboard and mouse. It doesn't muffle all sound, but it quiets those movements down a lot. Remember, the fewer artifacts you record, the less tinkering you will have to do later silencing or evening out the recording.

A few strategically placed pillows, a sleeping pad and a towel, and we are off to the races!


Another very important piece of equipment to have, at least to me, is a good full ear set of studio monitors (or you can be normal and just call them headphones ;) ). I've been using the same pair for many years (Audio-Technica ATH-M40fs Studiophones), and I hope they never break. They cup the ear and they are great at preventing sound leakage. That's important when you get up close to a microphone, as you don't want other people's conversations to be invading your recording. I also like this particular model because it has a long cord, so if I need to get up to move something, I don't have to take them off mid recording.

Blue Snowball iCE, Ringer shock mount, Audio-Technica Studiophones, and a pop filter, all mounted to a Rode swing arm.


GEEK TRICK: There are two enhancements that are on the horizon for me. First is that, while the pillows and pad are effective and cheap options, they take up a fair amount of space and are not very customizable. To that end, a project I have in the works is to make a set of hanging "gobos", or pieces of sound deadening material that I can position exactly where I need them to provide optimum sound isolation. The second is to get something larger than my iPhone that I can use when doing on-air fact checks or reference checks. The keyboard works, as does the mouse, but again, noisy. What is much less so? An iPad or Android tablet. Plenty of room to see what I need to direct questions or comments, but very little noise to pick up on the microphone.

So there you have it. In case you have ever wondered what my current recording space looks like, at the moment, it looks like this. I hope to soon show some improvements and enhancements, but for a new podcaster, this method is effective, inexpensive, and takes up little space. Plus, you get to sleep on it later :). Three cheers for multitasking!

Do you like the approach? Do you think it's silly? Can you suggest improvements? If so, I'd love to hear from you. Please feel free to leave comments below. Next time, we'll talk about what you do after you've recorded your masterpiece.

Tuesday, February 28, 2017

So You Want to Produce A Podcast? Part Two: Topic and Control

For each of the areas in my introductory "5,000-foot view" post, I will take some time to dig deeper and provide a closer look. Warning: this may get messy as we continue.

All successful podcasts tend to hang on a central theme. That theme can vary wildly and cover a lot of different areas, but there should be some anchor point that you as a show producer can draw on. Some of my favorite "wide net" podcasts are "Stuff You Missed in History Class" and "Stuff You Should Know", both from HowStuffWorks.com. Their format allows them to cover a huge array of topics, and with that comes the high probability that such a broad range will not hook every listener. By contrast, a focus on a very specific niche, like Joe Colantonio's "Test Talks" (a test automation podcast) or Ruby Rogues, means you are likely hitting areas you are familiar and comfortable with, but you also run the risk of hearing things you have heard before. In my opinion, the more niche the show, the less likely one or two people will be able to do it justice. Unless you are a genuine super-nerd in the area in question who can go into great depth on the topics you are covering, it's much harder to produce solid shows with only one or two people talking; guest contributors are essential for this kind of podcast. By contrast, covering something broad, or something that changes week to week, means you can riff on it, provide your own thoughts and opinions, and keep things fresh.

Regardless of the topic areas you choose to cover, you will have to do your homework for any given show. If you invite a guest on the podcast, it's always a good idea to research what they've done in the past, what their level of expertise is, and how they acquired it, and to get them to open up and share their journey to where they are today. Even if you are not appearing "on mic", if you have thoughts, ideas, or questions you would like covered, make sure the participants know about them. If you have the time, compile a list of questions you'd like to ask your guest and send those to them in advance. You don't have to follow a set script, but by giving your guest a heads-up on the areas you are interested in, they can answer back and either give you more areas to consider, broadening your questions, or let you know ahead of time where there might be better resources for those questions. Either way, it helps you tailor what you ask your guests or, barring that, what you choose to ask yourself as you discuss those areas.

As a show producer, if you are appearing on air, one of your jobs is to be the showrunner. With "The Testing Show", I usually defer to Matt Heusser, when he's on, to moderate the discussions, but I take the lead in managing the topics and "watching the clock". This is an important part of topic and voice: making sure that all participants stay focused. That's not to say a good conversation should be abruptly stopped, but as showrunner I need to make sure I won't have to be heavy-handed with the editing, so I tend to remind people of our timing.

Many podcasts are free-form, and they go as long as they feel like going. Some people do this better than others. One of my long-time favorite podcasts is "Back to Work" with Merlin Mann and Dan Benjamin. Suffice it to say, every episode is long: an hour plus, sometimes as much as two hours. There is a lot of banter in their shows, like two old pals just talking about whatever comes to mind. They use a lot of inside jokes, and it takes a while to get to the topics, but they do get there, and they make it engaging. Other shows I have heard don't do this quite as well. It's important to decide on your tempo and how you want to achieve it. The Testing Show has settled on a general format, reached after discussions with our sponsor, QualiTest. We decided that the average show length would be 30 minutes. Sometimes they are shorter, sometimes a little longer, but generally we aim for that 30-minute sweet spot. We also tend to have a "news segment" at the start of each show, which gives us the opportunity to be open and casual while discussing a current-ish topic. We post new episodes every two weeks, which means what we think is topical can seem like old news by the time it appears. With that in mind, we aim to cover news items that have broader messages and takeaways. That leaves us with twenty minutes to talk about the main topic. The advantage of a fixed-length show is that we can keep the message tight and focused. The disadvantage, at times, is that it can sound shallow or not go into the depth some listeners would want. We take these on a case-by-case basis, and if a topic is really deserving, or covers enough unique ground to warrant it, we will do a two-part episode. Generally, though, I prefer to keep the shows as standalone entities, so I try my best to make sure heavy editing is not required. Sometimes I succeed. Often I don't, but that's its own post ;).

GEEK TRICK: I encourage anyone producing a show to keep a production schedule on hand. This is key as you go through recording episodes. Based on the queue of recordings and your posting schedule, when will this current recording appear? We have had stretches where we were very lean on guests, and a recording might get turned around and uploaded to the feed within a week (definitely stressful, but the content is fresh and current). At other times, we have a backlog of recordings to work through; we have had shows recorded that didn't get posted as episodes until two months later. As a show producer, looking at this schedule and being aware of the post dates for any potential recording can help you decide whether it makes sense to record a show on a given date or wait until later. Sometimes a guest has a really tight schedule and you will only get them for that time period. If that's the case, you may need to shuffle your backlog a little so that that special guest can appear in your episode stream sooner rather than later. Additionally, when guests or panelists are talking about upcoming events, it really helps to have a clear window as to when the show will air; it makes no sense to talk about an appearance or an event that will have already happened. One exception is if the materials for a presentation will be public by the time the podcast appears. You can then note their location in the show's transcription and show notes.

Another thing a producer should consider is whether they are willing to "correct" items "in post" (meaning that while editing, they notice something that is incorrect, outdated, or in need of clarification). Personally, my preferred method is to use the show transcript and notes for this purpose, but occasionally something will stick out, and to avoid confusion or add clarity it makes sense to drop a comment into the show mid-stream. I recently did this with regard to a discussion we were having about Socialtext's current CI system. Though I wasn't on that show, there had been enough changes that I felt it warranted dropping in a small voice recording to clarify what we are doing currently.

It may take a while for your show to "find its voice". It's tempting to think you will go live immediately with your first recording as your first episode. I strongly encourage you to record a "pilot" podcast first, to see if you like the format and to see what you will have to do to produce the show you want to produce. We did this with "The Testing Show" and decided that the first recording was just too rough to go live with. We still have it, and at some point we may post it as an example of "first steps", to show what we started with and how we made changes over the subsequent episodes. I have heard several people recommend recording five episodes, then recording episode six, labeling that one as the first episode and discarding the previous five. If you are a one- or two-person podcast with the same people each time, that makes sense. If you are asking guests to appear on your show, it's not cool to take up their time and then not post their contributions. Since we decided from the outset to be a guest-driven show, we went live with episode two.

Are there any other questions or things you wonder about when it comes to developing the voice of your podcast, and how to "run the show" once it's recording time? If so, please leave questions below, and I will be happy to answer them, expand on them in this post, or write new posts to talk about them.

Monday, February 27, 2017

So You Want To Produce a Podcast? Part One: The 5,000 Foot View

I need to thank Saron Yitbarek. She posted a tweet that had me laughing last night, as she was talking about how she needed to edit a podcast, but that she was procrastinating the editing, and she wanted to write a blog post about podcasting instead.


Please forgive my misspelling #TheTestingShow in my reply; I was a little tired. Still, her tweet has been rattling around in my head all day. What if someone who listens to The Testing Show wanted to start their own podcast? What would I tell them? How should they start? What should they do? How do you actually produce a show? Could I put this all into one post?

I decided the answer to that was "probably not", but I could do a series of posts about what I do as a quick and dirty tutorial about my approach as to how I edit and produce The Testing Show, so if you will indulge me, that is exactly what I am going to do.

First, you need a show, and to have a show, you need a topic. It helps if the topic is something you are nerdy about, has a direct connection to your professional life, or is otherwise something you would climb a mountain of people to do and, thus, talk about it with a passion. Hyperbole much? Maybe, but seriously, if you dig the topic in question, you will be motivated to make many installments, and that tends to encourage the commitment to keep making episodes.

Second, you need to decide how you want to present your show. Do you want to be a first-person podcast, where you do all the talking? Don't dismiss this, as several of my favorite podcasts are exactly this: Dan Carlin's shows Hardcore History and Common Sense, Stephen West's Philosophize This, most of the podcasts on QuickandDirtyTips.com, and a new one I am currently enjoying called The Ends. All of these shows have a single speaker/narrator and work well in this format. Upside: you can produce shows anytime you want to. Downside: it's all on you, and that means you have to provide all of the research, commentary, and thoughts. An interview or panel show adds additional speakers and commentary. Upside: you get a variety of viewpoints. Downside: scheduling everyone to record can be a logistical challenge, and the more people you want on the call, the greater the challenge. For The Testing Show, we have a rotating cast of regulars. The general rule is that three of us together meet the minimum requirement to record a podcast on a topic; two of us and a guest also meet that requirement. In a pinch, if we are remote or at an event, we will do a one-on-one interview with someone, but we strive to get three people into the conversation so there's some give and take.

Third, how do you want to capture your audio? For the time being, I will leave video out of the mix. I will also leave out in-person recording (though the voice recorder apps on iPhone and Android are actually quite good, and I have used them in live settings to great effect). We're talking about an audio recording done with remote participants. If you'd like an inexpensive but quite reliable recording method, I recommend using Skype and Ecamm's Call Recorder for Skype. There are numerous recording options, but for now, focus on the voice-only option. The calls are saved as .mov files, and each file can be imported into your audio editor of choice.

Fourth, you need an audio editor. Which one you use is up to you. For the past seven years, I've used Audacity. It has a lot going for it, and once you get a handle on how to use it, it can be very fast to work with. Truth be told, there are a lot of features that will feel like overkill; if you want to record a podcast, 90% of the features will remain untouched, but I promise, the 10% you do use, you will use all the time, and they will become second nature to you (and as soon as they do, don't be surprised to find the number of "essential features" starting to creep up ;) ). Whatever you use, make sure you can import the file types you save. Yes, I'm using a Mac, so I'm using Mac vernacular here. Once I've imported the Skype call as a .mov file, I am presented with a stereo track that has two channels. The first is the channel with just my dialog, or the dialog of the person who recorded the call. The second channel is the dialog of everyone else on the call. If you want to go very clean and are willing to spend the time to do it, you can have everyone record their own version of the call, send you their .mov files, and import them all together.



GEEK TRICK: If you want to go this route, and you want a cheap sync-up option, record a "beep series" at the start of the recording; occasionally, during silences, insert the beep series again as a standalone audio spot; and run the beep series at the end of the recording. Why? This series of beeps will be a visual cue when you import the audio files to help you line everything up. Once you do that, split all of the stereo tracks into mono, then mute the secondary tracks that have multiple voices together. If you do this, you will get multiple first-person recordings, made local to each speaker's system, and the odds of removing dropouts, clicks, and audio artifacts are much higher.
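If you're comfortable with a little scripting, the "line everything up" step can even be estimated numerically: cross-correlating two tracks around the shared beep series finds the sample offset between them. A minimal numpy sketch, with an illustrative function name and toy signals of my own devising (not part of Audacity):

```python
import numpy as np

def find_offset(reference: np.ndarray, track: np.ndarray) -> int:
    """Estimate how many samples `track` lags behind `reference` by
    cross-correlating the two (e.g. around the shared beep series).
    A positive result means: shift `track` that many samples earlier."""
    corr = np.correlate(track, reference, mode="full")
    # In "full" mode, index (len(reference) - 1) corresponds to zero lag.
    return int(np.argmax(corr)) - (len(reference) - 1)

# Toy example: the same beep appears 50 samples later in the second track.
beep = np.sin(np.linspace(0, 40 * np.pi, 400))
ref = np.concatenate([beep, np.zeros(600)])
delayed = np.concatenate([np.zeros(50), beep, np.zeros(550)])
offset = find_offset(ref, delayed)
```

In practice you would correlate just short windows around the beeps rather than the full tracks: a full cross-correlation of hour-long recordings is slow, and speech elsewhere in the tracks can confuse the peak.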

If you can't get a native recording from everyone, or if you are the only one able to do the recording (common if you ask a guest to be on the show; it's rude to ask them to shell out for software they may never use again), then you can import a single .mov file, split it into two mono tracks, and sync-lock the tracks together. This way you can select sections in each track to mute, and if you cut out sections of audio from one track, you will also be cutting the same time period in the other track.
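In script form, the split-and-mute idea looks roughly like this numpy sketch. The helper names are hypothetical, and I'm assuming the call has already been decoded into a samples-by-channels float array; Audacity does all of this through the UI:

```python
import numpy as np

def split_stereo(stereo: np.ndarray):
    """Split an (n_samples, 2) buffer into two mono tracks: channel 0 is
    the local recording, channel 1 is everyone else on the call."""
    return stereo[:, 0].copy(), stereo[:, 1].copy()

def mute_span(tracks, rate, start_s, end_s):
    """Silence the same time span across sync-locked mono tracks — the
    script equivalent of selecting a region and pressing Command-L."""
    a, b = int(start_s * rate), int(end_s * rate)
    for t in tracks:
        t[a:b] = 0.0

rate = 10                    # toy sample rate for illustration
call = np.ones((100, 2))     # 10 seconds of fake stereo audio
host, guests = split_stereo(call)
mute_span([host, guests], rate, 2.0, 4.0)  # cut a cough from both tracks
```

Applying the mute to every track in one call is the point of sync-locking: both channels lose exactly the same time span, so they stay aligned.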

GEEK TRICK: One of the easiest ways to quickly condense a recording is to use the "Truncate Silence" feature. You enter a minimum silence duration to search for (usually anything longer than a second), and Audacity truncates each of those silences down to half a second. Why half a second? That's a typical pause length in average conversation. By reducing all silences to under a second, you can easily take several minutes out of a podcast recording, with very little chance of cutting off something important. Along with this, if you listen to tracks and decide you want to cut sections, I suggest selecting the section and muting it first (meaning silence everything in that selected space, which on the Mac is Command-L, and Ctrl-L on the PC). By doing this, you can do a rough cut of your show, chop out the bits you know you're not going to use, then run Truncate Silence to squeeze everything together.
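Here is a simplified sketch of what Truncate Silence amounts to, under assumptions of my own choosing (an amplitude threshold for "silence", no fade smoothing at the cut points); real audio needs a threshold tuned to the recording's noise floor:

```python
import numpy as np

def truncate_silence(samples, rate, threshold=0.01, min_gap=1.0, keep=0.5):
    """Collapse any silent stretch longer than `min_gap` seconds down to
    `keep` seconds. A sample counts as silent if |amplitude| < threshold."""
    silent = np.abs(samples) < threshold
    out, i, n = [], 0, len(samples)
    while i < n:
        j = i
        while j < n and silent[j] == silent[i]:
            j += 1  # scan to the end of this silent (or non-silent) run
        if silent[i] and (j - i) / rate > min_gap:
            out.append(samples[i:i + int(keep * rate)])  # shorten long gap
        else:
            out.append(samples[i:j])  # speech or short pause: keep as-is
        i = j
    return np.concatenate(out) if out else samples

rate = 100  # toy sample rate for illustration
audio = np.concatenate([0.5 * np.ones(100), np.zeros(300), 0.5 * np.ones(100)])
short = truncate_silence(audio, rate)  # the 3 s gap collapses to 0.5 s
```

The mute-then-truncate workflow above falls out naturally: muted sections become long silences, and one pass of this squeezes them all down to conversational pauses.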

Another consideration, and this is important when you decide to publish: do you want to have show notes? Do you want to have a transcript? Both? The Testing Show provides both show notes and a full transcript of the audio. I have tried a number of methods for this, and truth be told, I keep returning to this approach: fire up whatever word processor you want to use in one window and your editor in another, and as you do your fine editing, type out your transcript in time with the second-by-second audio scrubbing you are doing. If you are not providing a full transcript, this will be overkill. If you are not doing what I refer to as "grammatical audio editing" (removing the "um"s, "ah"s, "like"s, "you know"s, and other vocal tics we all use when we speak), then again, this is overkill. If, however, you decide to provide both a grammatical audio edit and a full transcript, you might as well do them both simultaneously. You may incur a 15-20% overhead by doing both at the same time, but that's not bad, really.

GEEK TRICK: If you come across a comment that you think might need referencing in the show notes, either highlight it or insert an endnote with a reminder about that entry. Later, when you review the endnotes, fill in the actual reference values or URLs for resource links. If you are online and can do it right then and there, all the better. Seriously, do this, and you will have produced 75% of your show notes. As for the pithy commentary to add as your "selling paragraph", that's an everyday struggle, but you will get better the more you do it. Honestly, I sometimes spend more time getting the introductory paragraph together than compiling all of the reference materials, and yes, I frequently feel like an idiot as I review the text I've written, but over time you learn to roll with it. Sometimes it works. Sometimes it falls flat. So it goes.

Finally, you may want to spice up your podcast with music, inserted comments (bumpers) and some intro and outro comments. You may decide to do this differently each time, or you may want to standardize what I call "audio beds" that include these. For The Testing Show, our intro and outro music is provided by my band, Ensign Red. If you can make your own music, that will provide you with the ability to customize it as you need for your intros and outros or other uses. If you are not musically inclined, there are numerous sources of Creative Commons free use music samples that you can download and use. If you'd like to take the plunge, start with CreativeCommons.org and see what sounds interesting to you. It's good protocol to give credit to the artists whose music you choose to use in your podcast. Mention them in your show notes and in the podcast itself. It encourages discovery of their music and thus, makes them more willing to keep creating Creative Commons content.

So there you have it, a whirlwind look at creating a podcast. Have I left a lot of stuff out? Most certainly. Would you like to know more? I'm happy to share what I know and how I do it, partly because I know the challenges of getting started and how large a mountain that is. Also, I have a slightly selfish ulterior motive. My hope is that, by sharing these posts, and giving people a peek into my world of producing these shows, someone might comment back and say "you know, you could do this sequence of steps so much more efficiently if you just [fill in the blank]." Hey, it happens with code review all the time, so why not with podcast production review as well :)? 

Oh, and Saron, if you do publish that blog post, please let me know. I'd love to read it.

Thursday, October 13, 2016

Start Making Sense with Sensemaking - a #TheTestingShow Follow-up

One of the primary reasons my blog is not updated as frequently as in the past is that I have been putting time into producing The Testing Show. Granted, I could do a quick edit, post the audio, and be done with it, but as a team we decided we wanted a show that would flow well, be on point, and also have a transcript of the conversation. At first, I farmed out the transcripts, but I realized there was a fair amount of industry-specific material we would talk about that I would have to go back and correct or update, and then I'd have to put together the show notes as well, with references and markers. Realizing this, I decided it made sense to just focus on the transcript while I was making the audio edits, so I do that as well.

Translation: I spend a lot of time writing transcripts and that cuts into my blogging. My goal is to make sure that The Testing Show is as complete as possible, as on point as possible, and that the words you hear and read are coherent and stand up to repeated listenings. Also, since it's a primary activity that I do within our little testing community, I really need to do a better job highlighting it, so that's what this post is meant to do. Consider it a little shameless self-promotion with a couple of additional production insights, if you will ;).

Occasionally, I get a podcast that tests my abilities more than others, and this week's episode proved to be one of those. We try our best to get the best audio files we can, but sometimes, due to recording live at a conference or trying to capture a transatlantic call, we have to deal with audio that is not crystal clear. Often we get background noise that can't be isolated, at least not effectively. We sometimes get audio with varying levels between speakers (one is loud, the other is soft, and leveling means introducing line noise to compensate for the quieter speaker). This time, it was the fact that the audio stream would just drop out mid-sentence, and we'd either have to repeat several times or lose words at random places. Because of that, this is a more compact show than normal, and that was by necessity. It was also a challenge to put together a transcript; I had to listen several times to make sure I was hearing what I thought I was hearing, and frankly, in some spots, I may still have gotten it wrong.

With that, I want to say that this was an interesting re-framing of the testing challenge. Dave Snowden is a philosopher, writer, and principal creator of the Cynefin Framework. "Cynefin" is a Welsh word that means "haunt" or "abode". In other words, it's the idea that there are things that surround you all the time that can give you clues as to what's going on, but you won't notice it unless you "live in it". There's a lot more to the framework than that, and Dave Snowden talks quite a bit about what it is and how it's been applied to various disciplines. Anna Royzman also joined in on the call and discussed her involvement in using Cynefin, and what it might mean for software testers who want to use the framework and approach with their testing. A caveat. This is a framework that has been used in a variety of places, from applications in government to immigration to counter-intelligence and software development. Testing is a new frontier, so to speak, so much of this is still to be determined and very much in the "alpha" stage.  Anyway, if you'd like to know more, please go have a listen.

Thursday, September 1, 2016

This Week on #TheTestingShow: Real Work vs Bureaucratic Silliness

I have come to realize that I am terrible in the shameless self-promotion department. I have been actively involved for the past several months in producing and packaging "The Testing Show", but I've not been very good about talking about it here. Usually, I feel like I'm repeating myself, or rehashing what I already talked about on the episode in question.

When Matt takes the show on the road, however, I do not have that same issue. This most recent episode of The Testing Show, titled "Real Work vs. Bureaucratic Silliness" was recorded at Agile 2016 in Atlanta. As such, Matt was the only regular panelist there, but he put together a group consisting of Emma Armstrong, Dan Ashby, Claire Moss and Tim Ottinger. For starters, this episode has the best title of any podcast we've yet produced for The Testing Show (wish I came up with it, but it's Tim's title). I think it's safe to say all of us have our share of stuff we like to do, or at least the work that is challenging and fulfilling enough that we look forward to doing it. We also all have our share of "busywork" that is inflicted on us for reasons we can't quite comprehend. Tim puts these two forces on a continuum with, you guessed it, Real Work on one side, and Bureaucratic Silliness on the other.

I think there's an additional distinction to be made here: recognizing when a task, behavior, or checkpoint actually is an occupational necessity, and when it is truly Bureaucratic Silliness (BS). In my everyday world, let's just say this tends to grade on a curve. Most of the time, BS comes about because of a legitimate issue at one point in time. Case in point: some years ago, in spite of an extensive automation suite, robust Continuous Integration, and an active deploy policy, we noticed some odd things passing through and showing up as bugs. Many of these were not in areas we had anticipated, and some of them just plain required a human being to look them over. Thus, we developed a release burndown, which is fairly open-ended, has a few comments on areas we should consider looking at, but doesn't go into huge detail as to what specifically to do. The burndown is non-negotiable, we do it each release, but what we cover is flexible. We realized some time back that it didn't make sense to check certain areas over and over again, especially if no disruption had occurred around those features. If we have automated tests that already exercise those areas, that likewise makes them candidates to not be eyeballed every time. If a feature touches a lot of areas, or we have done a wholesale update of key libraries, then yes, the burndown becomes more extensive and eyeballing certain areas is more important, but not every release, every single time.

Usually, when BS creeps into the system, it's because of fear. Fear of being caught in a compromising situation again. Fear of being yelled at. Fear of messing something up. We cover our butts. It's natural. Doing so once or twice, or until the area settles down or we figure out the parameters of possible issues, makes sense, but when we forget to revisit what we are doing, and certain tasks just become enshrined, we miss out on truly interesting areas while wading through BS to the point that we would rather pull our intestines out with a fork than spend one more minute on that particular task (yes, that's a Weird Al Yankovic reference ;) ).

Anyway, were I part of the panel, that's part of what I might have brought up for discussion. If you want to hear what the group actually talked about, well, stop reading here and listen to The Testing Show :).

Monday, June 13, 2016

The Testing Show Is YOUR Show

Edited: Date changed, see below.

TL;DR: We have a new podcast, The Testing Show. We need show ideas, questions, and your involvement! We are having a tweetup using hashtag #TestingShowChat on Wednesday, June 22, 2016 from 11:00 a.m. to noon, Eastern USA time zone. Join us!!!

And now on to more of the show... so to speak.

For the past several months, a fairly large percentage of my time has been spent on an endeavor that I enjoy. That endeavor has been producing and editing "The Testing Show". Put simply, the Testing Show is a podcast hosted and paid for by Qualitest Group. A few years ago, Matt Heusser and I partnered on a podcast hosted by Software Test Professionals called "This Week in Software Testing" (TWiST). It ran for two and a half years, we produced over 130 episodes, and at the time, we felt like we had said all we really wanted to say, or could say. We took a hiatus from podcast production that lasted longer than either of us thought it would.

Towards the end of 2015, Matt and I, along with Justin Rohrman and Perze Ababa, discussed the idea of creating another software testing podcast, reminiscent of the old TWiST, but updated and focusing on newer topics and changes that have happened in the testing world since we stopped producing TWiST. The net result of those efforts is The Testing Show.

Twice a month, we bring you news on an issue happening in the world of testing, or something we think is interesting going on in the broader world and how software testing affects the stories we talk about. In previous episodes, we've covered software systems that erroneously released felons, automated trucking, the challenges of accurate Super Bowl predictions, and software updates causing a satellite to fall out of orbit and burn up in Earth's atmosphere... hey, who said testing wasn't fun :)?

We've had a great group of guests come and join us on the show to talk about topics related to their areas of expertise, ranging from Test Management (both process and people), being a Trusted Resource, Testing in Scrum, visits to QA or the Highway and the Reinventing Testers workshop, Automation & Tooling, and Measurement & Metrics.

At the end of each episode, I ask listeners to send us their feedback about the shows they have heard, what they would like to hear, and topics we could cover. It's in this aspect that I want to reach out to our listeners, both current and potential, and see if we can do more.

In our minds, we discuss topics that we think would be interesting to our listeners. Comments on Twitter and other social media sites have been positive, and we thank you all for that. We'd like to see if we can do better, both in the way of helping make a show that you would love to listen to, and also make a show that you would love to share.

As producer of the show, I do my best to make each fortnightly episode the best quality it can be. We often talk for an hour plus, but we edit the show to be roughly thirty minutes, give or take a few. We create show notes with complete transcripts of all the conversation presented, as well as references to what we cover and jumping-off points where our listeners can learn more. As we do this, we realize that, amongst ourselves, we have a limited view as to what's most important to you, our listeners. We can make guesses, but we don't know everything you know, we haven't walked everywhere you have walked, and you have ideas and questions we may never think to ask. We'd like to engage with you directly, and get some opportunities to know what you would like to hear on the show, who you would like to hear from... and if one of the people you would like to hear from happens to be "you", hey, we can talk :).

On Wednesday, June 22, 2016 from 11:00 a.m. - noon Eastern, the cast of The Testing Show will be hosting a live event on Twitter. We will answer questions about the show, discuss things we've talked about in previous shows, but more to the point, we want to know things that you would like to hear us discuss in future podcasts. If that interests you, and you'd like to give us some feedback on what we are talking about, or what we could be talking about, we hope you'll join us. Use the hashtag #TestingShowChat to play along with us. If you can't participate at that time, you can always send email to us at thetestingshow@qualitestgroup.com, and we will do follow up in future shows.

Another way that you can help us get the word out is to leave reviews of The Testing Show on iTunes. If you like what you hear, leave us a review. More reviews mean better placement in search engines, and more chances of people discovering the show. Share the show with your friends and encourage them to listen as well. We aim to make The Testing Show the best podcast we possibly can, but ultimately, the show belongs to you, our listeners, and our success resides with you. We'd love it if you would join us, and help us make your show the best we can make it.

Thursday, March 17, 2016

Introducing... The Testing Show!!!

For the past few months, along with a bit of back and forth, negotiations, discussions, conversation, and a few trial runs, I am back in the podcasting saddle once again!

Qualitest Group has agreed to do a run of podcasts with Matt Heusser, Justin Rohrman and me as regular attendees, along with Brian Van Stone from Qualitest and a revolving cast of what we hope will be many people in the software testing world to interact with us and talk testing topics, news of the day, and have a slight propensity for silliness at times.

We have posted the first three episodes at "The Testing Show" page. The first three episodes cover "Skill on the Test Team", "Testing as a Trusted Advisor" and "Testing When You Don't Have Enough Testers". We have a couple more episodes that will be posted within the next week, and then we aim to keep up to date with a new post every two weeks.

So what do we want you all to do? Well, we want you to listen. We want you to comment. We want you to suggest topics for us to talk about. Most of all, if you like what you hear, we want to have you share it with others. The more people listen, like, and respond to the podcast, the more likely that Qualitest will book us to do more shows in the future. We should also mention that while Qualitest is sponsoring the show, The Testing Show is not being used to market Qualitest, at least not directly. Of course they want to encourage people to consider them, but Qualitest does not tell us what to say or tell us what topics we cover. We do that ourselves.

Anyway, it's been a long time. Come join us, and if you genuinely enjoy what you hear, consider getting in on the action with us. We are always interested in topics and guest speakers, so if you want to participate, let us know :)!