Showing posts with label collaboration. Show all posts

Thursday, September 1, 2016

This Week on #TheTestingShow: Real Work vs Bureaucratic Silliness

I have come to realize that I am terrible in the shameless self-promotion department. I have been actively involved for the past several months in producing and packaging "The Testing Show", but I've not been very good about talking about it here. Usually, I feel like I'm repeating myself, or I'm rehashing what I already talked about on the episode in question.

When Matt takes the show on the road, however, I do not have that same issue. This most recent episode of The Testing Show, titled "Real Work vs. Bureaucratic Silliness", was recorded at Agile 2016 in Atlanta. As such, Matt was the only regular panelist there, but he put together a group consisting of Emma Armstrong, Dan Ashby, Claire Moss and Tim Ottinger. For starters, this episode has the best title of any podcast we've yet produced for The Testing Show (I wish I'd come up with it, but it's Tim's title). I think it's safe to say all of us have our share of stuff we like to do, or at least work that is challenging and fulfilling enough that we look forward to doing it. We also all have our share of "busywork" that is inflicted on us for reasons we can't quite comprehend. Tim puts these two forces on a continuum with, you guessed it, Real Work on one side and Bureaucratic Silliness on the other.

I think there's an additional distinction that needs to be included here, and that's recognizing when some task, behavior, or check point actually is an occupational necessity, and when it is truly Bureaucratic Silliness (BS). In my everyday world, let's just say this tends to grade on a curve. Most of the time, BS comes about because of a legitimate issue at one point in time. Case in point: some years ago, in spite of an extensive automation suite, a robust Continuous Integration setup, and an active deploy policy, we noticed some odd things passing through and showing up as bugs. Many of these things were not areas we had anticipated, and some of them just plain required a human being to look them over. Thus, we developed a release burn down, which is fairly open ended, has a few comments about areas we should consider looking at, but doesn't go into huge detail as to what to specifically do. The burn down is non-negotiable, we do it each release, but what we cover is flexible. We realized some time back that it didn't make sense to check certain areas over and over again, especially if no disruption had occurred around those features. If we have automated tests that already exercise those areas, that likewise makes them candidates to not have to be eyeballed every time. If a feature touches a lot of areas, or we have done a wholesale update of key libraries, then yes, the burn down becomes more extensive and eyeballing certain areas is more important, but not every release, every single time.

Usually, when BS creeps into the system, it's because of fear. Fear of being caught in a compromising situation again. Fear of being yelled at. Fear of messing something up. We cover our butts. It's natural. Doing so once or twice, or until the area settles down or we figure out the parameters of possible issues, makes sense. But when we forget to revisit what we are doing, and certain tasks just become enshrined, then we miss out on truly interesting areas, while we go through BS to the point that we would rather pull our intestines out with a fork than spend one more minute doing that particular task (yes, that's a Weird Al Yankovic reference ;) ).

Anyway, were I part of the panel, that's part of what I might have brought up for discussion. If you want to hear what the group actually talked about, well, stop reading here and listen to The Testing Show :).

Friday, July 29, 2016

Consider the Source: A Tester Invades Philmont

One of the more entertaining aspects of our trek would have to be the final two days, in that we were joined by an expedition mate that, let's just say, is a little different than the average hiker:


Testhead readers... meet "T.K."

Actually, his real name is "30", which is the number he wears around his neck for identification purposes, but part of the "Burro Expedition" experience is that each crew gets a burro to join them for two days, and the crew gets to name the burro for the duration of the trek. Needless to say, it would be impossible to keep track of the names changing all the time, so the burros are officially known by their numbers, but are also conditioned to respond to any name they get called.

During our trek, the boys talked a lot about the Anime series "Angel Beats" (which also happens to be one of my favorites), and we joked a lot about who our favorite character in the series was. That character was "T.K.", a long haired, always moving dancer who would speak in nonsensical English phrases (even in the original Japanese recording). When we met burro number 30, and he came up to us and looked at us with a circular head bob, we all laughed at the similarity to the movement T.K. did in the anime, and we all said "Hey, let's call him T.K.!" With that, T.K. became an official member of the crew, and would help us on the trek for the last twenty miles or so by carrying some of our gear for us.

As we were making our way to the Baldy Skyline camp area, we had a number of choices for how to get there. As we scanned the area, we saw a road that looked to be a straight shot up to the area, and pretty short by comparison. We started walking towards the camp and for the first bit of the stretch, T.K. was walking alongside us and made no fuss. When we saw a fork in the road, and the trail that continued up to the camp area came into view, T.K. started walking that way, but we guided him back to the main road. Over the course of the next few minutes, T.K. walked slower and slower, and finally stopped. We tried to coax him forward, and we'd get a few steps in, and then he'd stop again. We did this about half a dozen times, and each time, it was getting progressively harder to move T.K. further up the road.

Additionally, as we looked at T.K., we had the impression he was looking at us as if to say "hey, you guys are all idiots! Did you see that trail that forked off the main road a while back? That's where we're supposed to go. Trust me, I've done this trek more than a few times, I know how to get there!" Still, we felt confident that our short cut would work and save us time, so we persevered. I've had experience leading horses and burros in the past, so I took it upon myself to hold his bridle and lead rope and coax him up the road, and ultimately, he let me lead him up the road.

The road we had chosen changed. It became steep. It was straight, sure, but it was also a laborious climb to get up to the ridge line where the camp was. It was tiring, it was in direct sunlight, and by the time we reached the top of the road and the ridge line, we realized that, while we were at the top of the hill, the actual camp area we needed to be at was still about two kilometers east. As we trudged to the camp area, we saw the trail converge with the upper road, and we also saw that the trail was coming out of a wooded area, with plenty of shade, meandering through a range of switchbacks that, while longer in total distance, would have been much kinder to our knees and backs. As I looked at T.K., it felt as though he was saying "See, I told you you were an idiot! I could have led you here and saved you a lot of hassle, but no, you had to go your own way. How did that work out for you?"

Does this sound familiar? I know it does to me. I've had this experience with my teams. We are so sure that what we are going to do will save us time, that it will be different than all the previous times we've tried to meet a goal, but someone, usually someone with a lot of experience, puts up resistance and tries to point out what we should be doing. Often, that advice goes unheeded, only to come back and bite us later when we realize that the insight was valid, accurate, and based in experience, but our hopes and optimism for a faster way (or a looming deadline) blind us to that hard earned experience.

I chuckled as I thought about this the following day, when we left the Baldy Skyline camp to head down to the next area and followed the main trail. T.K. just started going, and he barely stopped. He knew where he was, and he knew where we needed to go. Sure, we consulted our maps to be sure, but otherwise, the trail was the most direct route, and he just plowed ahead until we chose to call a halt for a rest break. During the remainder of the trek, as we were debating ways we should go, I'd quietly nudge T.K. to move forward, and I'd see where he wanted to go. Typically, I'd encourage following his lead, and each time, it would prove to be the most direct and easy to navigate route. Sure, it would have been more adventurous to bushwhack, but I'm sure T.K. appreciated not having to climb over or around felled trees or other obstacles.

There are times when new vistas are open to you, there is no trail to follow, and one way is as good as any other. At times, though, there are those who have gone before us and instinctively know when we are going the wrong way. It's wise to step back and listen to those who express reservations. Who knows how much sweat and toil they might save us if we listen to them ;).

Thursday, July 28, 2016

The Passion of the Champion: A Tester Invades Philmont

A couple of days ago, I shared the challenges that some of our youth leaders faced when they wanted to break camp and get to the next destination. Sometimes they had their act together. Many times they didn't. It would have been very easy for me to play the dictator and push and yell to get the team moving. I've done that before. It's very efficient, but it does nothing to develop actual leadership or confidence in abilities.

This was especially true the third day into our trek. Our Wilderness Guia wanted to do a special ceremony/presentation around the Leave No Trace principles. He'd planned out the whole thing, and was really looking forward to doing it. That third day, we had a late start, and we had to cover a lot of ground, much of which was unmarked and required us to climb and descend a ridge, cross a river, and make our way to that day's base camp. By the time we arrived, we were all bone tired, most of the day was gone, program areas were closing, and one of our boys was exhibiting signs of altitude sickness. Needless to say, I had to tell our Wilderness Guia that the ceremony he wanted to do wouldn't be possible that day. Based on the group's performance, tomorrow wasn't looking too good either, but we'd figure something out.

The reaction I got back to those comments was both unexpected and slightly amusing. He fumed, looking at me with complete disbelief, frustration and shades of anger. He was really invested in having this ceremony take place. Seeing this, I looked over to my associate advisor, asked for a minute, and said "want to try an experiment?" My fellow advisor was game, and with that I went back to our Guia and said:

"A sunset ceremony just isn't possible under the circumstances, but would you be willing to shoot for a sunrise ceremony? Look at the ridge line up there. It will take a little while to make the climb to the saddle, but we could do it in about an hour. Problem is, we are going up and over the ridge, so coming back for our stuff and doing it all again doesn't make sense. If you want the sunrise ceremony, you will need to make sure everyone is up, packed, and ready to leave camp at 5:00 a.m. That means waking up at 4:00 a.m. to make the departure time. Doable? Yes! Do I think you all will be able to? Surprise me :)!"

With that, I prepped for dinner, went to bed, and figured the sunrise ceremony would likely not happen. 4:00 a.m. rolls around, and what do I hear but the Guia up and about, waking his fellow trek mates, working with them to get out of their tents, getting those same tents broken down and packed, and hustling to get everyone ready to go. Each time he came around, I asked if there was any cause I could lend my hands to (a phrase the crew got to know well; I didn't tell anyone what they should or shouldn't do, but if they asked me to do something, I was always ready and willing to do so). Needless to say, we got out of camp in record time, and we made the scramble up the ridge line to the saddle, found a great spot to set up, and he led us in an excellent presentation that included meditation, discussion and watching a beautiful clear sunrise come up over the Great Plains and peek through the Sangre de Cristo mountains. What's more, we had an easy downhill descent into the Valle Vidal, a flat walk to our re-provisioning spot, and the ability to arrive at our destination for the day at 10:00 a.m. We had plenty of time to set up camp, relax, make lunch, join in activities at the base camp, and basically bask in the fact that today's mileage, while comparable to yesterday's, went faster and so much more comfortably.

What was the difference? One of the youth leaders was on fire. He was on a mission to make this ceremony happen. His drive helped carry everyone else along. He spelled out the importance, what it meant to him, and what he himself was willing to do to make it happen. As an adult leader, I do that all the time, but there's a considerable difference when a youth leader decides something is important enough to push the rest of the group along.

If there's an initiative you think is important, be a champion for it, but better yet, encourage others to champion that initiative alongside you. Our Guia wasn't successful just because he was persistent (don't get me wrong, that helped a lot ;) ). He was successful because he was able to get the other boys to share in his vision. He got them excited about doing what he had planned. That turned into action and results. More to the point, it gave them an excellent contrast between the rushed and harried pace of the day before, and the nearly effortless and smooth feeling of that day. I was able to use both mornings as models for what to do and what not to do. It would be great to say that they always made early morning camp breaks after that, but that would be untrue. We had our share of late days after this, too, but it made a stark contrast for them to see the difference in how they performed, and felt, comparing the late departure day to the early departure day. Over time, it became easy to help them share that narrative, and encourage them to get an early start on the day, but in those cases, someone else was able to champion a reason, and get others to rally around their flag.

Thursday, November 12, 2015

Early Morning Pub Chats - Live from #AgileTD

Day three. Lots of conferring, lots of fun, way too little sleep (LOL!).

It's always a challenge going to these events because my body fights the local time. It's frustrating to realize that it's 2:00 a.m. local while your body still thinks it's 5:00 p.m. at home, but regardless of what your body thinks, 6:00 a.m. local time will arrive, and you have to get up and go.

This is the third Lean Coffee of the conference, and will sadly be the last. I enjoy these gatherings because we have a totally impromptu discussion on topics I might not have considered asking about, and I learn new things each time.

Target Coverage and How to Get There:

What do we do when we have a long-term team that seems to work everything out of their heads? The challenge is when we are trying to get a baseline understanding of the coverage that is being applied. How much testing is actually being done? Do we know? Can we figure that out? How do we implement an approach that lets us see what is being covered? There are a variety of code coverage tools out there, but those only help if the people doing the primary testing and the code optimization are the same people.

However, in the case of the person presenting their dilemma, those are not the same teams. They are automating acceptance tests as they are creating them, but their legacy system doesn't have much in this regard. The trick is that coverage is often hard to quantify. If the idea is statement coverage, that's easier to get a handle on, since unit tests can give you statement coverage, but test coverage is a bit more nebulous, since there are so many potential permutations. Using an API testing approach can help to fill in the blanks here. By setting up tests that utilize the service layer, many of the business rules can be automated or exercised through the API rather than trying to use the UI for this. Visualizing coverage (drawing diagrams of the workflows) can help to make the areas to test more visible.
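To make the service-layer idea a bit more concrete, here's a minimal sketch. The apply_discount rule is entirely hypothetical, a stand-in for whatever logic an API endpoint would invoke; the point is that exercising the rule below the UI makes it cheap to cover the permutations:

```python
# A hypothetical business rule, standing in for the logic an API
# endpoint would invoke. Exercising it at the service layer covers
# the permutations without driving the UI.

def apply_discount(subtotal, customer_tier):
    """Gold customers get 10% off, silver 5%, anyone else pays full price."""
    rates = {"gold": 0.10, "silver": 0.05}
    return round(subtotal * (1 - rates.get(customer_tier, 0.0)), 2)

def test_gold_discount():
    assert apply_discount(100.00, "gold") == 90.00

def test_silver_discount():
    assert apply_discount(100.00, "silver") == 95.00

def test_unknown_tier_pays_full_price():
    assert apply_discount(100.00, "bronze") == 100.00

# Run the checks directly (a test runner would discover these itself).
test_gold_discount()
test_silver_discount()
test_unknown_tier_pays_full_price()
```

Three tiny tests like these hit every branch of the rule; getting the same coverage through the UI would mean three full login-and-checkout walkthroughs.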

Coaching Developers to Test Earlier:

With the idea that Agile encourages testing being an activity rather than a role, many developers are getting into the role of doing testing as well as writing code.

Kim mentioned that there has been a challenge in her organization with trying to help the developers get into the habit of testing. From my own experience, there needs to be buy-in from the programmers to ensure that testing is happening on their side. There's a benefit to asking the programmers to share their unit tests with you, the tester. Asking about the unit tests, and explaining that we need to see what is happening so we understand better what is being covered, sends a clear message.

Also, in our organization, no story is considered done if there are no unit tests or integration tests written by the programmers, so that helps make sure that unit and integration testing are done. Many programmers consider that "enough", but there's a lot more we can encourage them to do to test, in addition to having us do explorations and cover other areas. It's really helpful to communicate that we are trying not to be a bottleneck, so having the group work together to cover testing needs and to spread out the testing effort among the whole team helps encourage that philosophy. Some programmers will be resistant, but if the cultural expectation is that everyone tests, then we can get everyone involved more easily.

Distributed Testing and Team Integrity:

A challenge with teams that are distributed is that many people fall through the cracks and the sense of team unity disintegrates. When testing gets spread among a number of groups, and people are assigned to different groups for differing periods, there's a struggle to get everyone to share what they are doing, specifically domain knowledge. The idea of sharing ideas and developing a "community of practice" can help here.

Having a rotating book club, or getting the testers together to ask "what are you all working on, and would you be willing to give a quick summary of something you are learning?" can help here. Lean Coffee is actually a great mechanism for this. It can be a challenge when teams report to different managers, so it may be helpful to get the managers on board with the community of practice concept. The key is that there needs to be real knowledge exchange, not just another status meeting to have to sit through.

What's New in Agile Testing?:

The area of DevOps and Continuous Delivery is an interesting avenue that is becoming more prominent. Containerization is a hot topic, and the tools discussion can become a runaway train very quickly. There's always something new and shiny to get involved with, but the principles of testing in an Agile context are still pretty consistent. The real newness is the experiences that people are bringing to it. Agile itself is not exactly "new". It's fifteen years old now, which in technical terms is fairly ancient. For each organization, though, Agile may be brand new to them, and there's lots of opportunity to go in and have a broad range of experiences. Tools come and go, and what's the new hotness one year may be old hat the next, but each organization gets the ability to try those things as they mature and determine what matters to them.

Sad to realize I won't be doing this tomorrow morning, but fun to realize I have lots to ponder and consider when I get back together with my team. Thanks for following along, day three proper is about to get underway.

Monday, November 9, 2015

Lean Café, Half a World Away - Live Blog From #AgileTD

The past few days have been extraordinary, to say the least. Starting with a flight from San Francisco to Istanbul via Turkish Airlines, I had a too-brief layover in Istanbul (one of the few times I've genuinely wished that I had more time between flights so I could get out of the airport and explore the city). The stopover in Istanbul was also noteworthy in that this is the farthest east I have ever traveled from where I have lived most of my life (i.e. California). Sure, it was just a stopover to get another flight to Berlin, but I set my feet down there, so it counts ;).



The flight to Berlin was enjoyable, and for the record, Turkish Airlines offers wonderful service, even to economy passengers like myself. If you get a chance to fly with them, I encourage it. I landed in Berlin about 10:00 p.m. local time, and most of the services in the airport were closed. I was fortunate to find the information desk open, and when I asked how I could get to the Dorint Sanssouci resort in Potsdam, the attendant returned with a printout that had a bus route to a train, then a train to a tram line, and then a walk to the destination. Part of me was dismayed (seriously, are there no cabs at this hour?), but the tester in me also smiled and said "OK, challenge accepted!" It made for a much more fun experience, and I had a chance to see a lot more of the city this way, as well as the approach out to Potsdam. I will confess, the tram dropped me off at Park Sanssouci, and as I walked off just before midnight, not a soul was around. No cars, no streetlights, no house lights, and I really wondered "where the heck am I?" Fortunately, after a brief walk to the west, I saw the destination. Made it!

Monday was a day to adjust and get used to being nine hours ahead, and also to get a chance to see the sights in and around Potsdam (courtesy of my friend Mieke Mertsche, who showed me around and introduced me to several of the local details, including Red Bull Cola and its characteristic pine tree flavor; seriously, it's in there :) ). Additionally, the speakers dinner took place in the central area of town, with good dinner, great conversation, and a chance to catch up with many friends both personal and virtual (it's still a little strange each time someone walks up to me and says hi, and I realize they know me through my blog or via Twitter. Awesome, but still strange :) ).


Today being Tuesday, the Agile Testing Days conference has gotten properly underway, and the first order of business is Lean Coffee (or Lean Café, as the sign says). For those not familiar with Lean Coffee, the idea is that everyone writes down topics that they would like to talk about, then everyone votes on the topics they want to discuss, and a queue is made based on the votes. Each topic gets seven minutes to start, and then, based on the energy in the group, more time is given to those topics until the energy runs out.
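The queueing mechanics are simple enough to sketch in a few lines. This toy version (the topic names and vote counts below are made up for illustration, not the actual session's tallies) just orders the proposed topics by votes, highest first:

```python
# A toy model of the Lean Coffee queue: topics are proposed, voted on,
# and discussed in descending vote order. Ties keep proposal order,
# since Python's sort is stable.

def build_queue(votes):
    """Return topic names ordered by vote count, highest first."""
    return [topic for topic, _ in
            sorted(votes.items(), key=lambda item: -item[1])]

# Illustrative topics and vote counts.
votes = {
    "Testing and 'that other stuff'": 7,
    "Incorporating UX and UI design": 5,
    "How to do BDD 'right'": 4,
}

queue = build_queue(votes)
print(queue[0])  # the top-voted topic gets discussed first
```

The timeboxing part (seven minutes, then a thumbs-up/thumbs-down vote to continue) happens around the table rather than in code, of course.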

A topic I put into play received the top votes, so I presented the topic of "Testing and 'that other stuff'". In my daily reality, I test, but I do other things, specifically related to being a release manager. At times, it seems that I get more interest and attention towards my efforts to improve the release management process than I do as a software tester. Part of that, I think, stems from the fact that release management is an infrastructure and delivery problem, and it is a very concrete one: either the builds are deployable, or they aren't. Testing has a lot of fuzz around it, in the sense that there is a lot of work done, but there's also a lot of judgment that can be applied to what is to be taken care of and what can be deferred. It's all important, but in smaller teams, I don't have the luxury of being "just a tester", I need to be able to adapt to additional roles and needs. Frankly, I'm totally cool with that, and I've appreciated the opportunities to get involved in other areas. Still, at times, I've wanted to see what could be done to help encourage and improve the visibility of the testing work I and my team are doing. One suggestion to increase the visibility of testing is to have specific stories that showcase the testing effort, as well as the time it takes to do the testing. Fact is, people respond to immediate needs, and if your role affects immediate needs, chances are people will pay closer attention to what you are doing when it directly affects what your team needs to accomplish their goals.

The next topic was "How do we incorporate more UX and UI design options into our work?". For me, the biggest way we can help with this is to get involved early in the design and discussion process. We use a three amigos approach to story development, and in that way, design, programming and testing get together to discuss the options available, how we can build testability into the design, and how to encourage good UX from the beginning, rather than having to improve it later. One way that I have started to think about this especially (and a foreshadowing of my talk tomorrow) is to focus on ideas of "inclusive design" where possible, and to encourage the idea of making the product as usable as possible by the largest number of people. That process can really help drive UX discussion.

"How to do BDD 'Right'" was the next topic, and there have been a variety of experiences with trying to use it in organizations. Cucumber is a popular framework for this, so it's not uncommon for people to think of Cucumber as BDD, but it's just one way of doing it. The real focus behind BDD is that behavior is primary, and the testing is based on the desired vs. actual behavior of an application. BDD affects UX and UI interactions, as well as other "ilities" of a product. Ideally, tests that focus on individual behavior characteristics are going to be the most effective. Trying to cram multiple behaviors into a single test will make tests more complex and more difficult to maintain as time goes on. By keeping the behavior characteristics as simple and as atomic as possible, tests are more specific, and ultimately easier to maintain.
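As a sketch of the "one behavior per test" idea, here's what atomic checks look like in plain code. The Account class is hypothetical, just a stand-in for a system under test, and these are ordinary test functions rather than Cucumber/Gherkin scenarios:

```python
# A hypothetical system under test.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

# Atomic checks: each test pins down exactly one behavior, so a failure
# points at exactly one rule. Cramming both behaviors into one test
# would obscure which rule actually broke.
def test_deposit_increases_balance():
    account = Account()
    account.deposit(50)
    assert account.balance == 50

def test_deposit_rejects_non_positive_amounts():
    account = Account()
    try:
        account.deposit(0)
        assert False, "expected a ValueError"
    except ValueError:
        pass

test_deposit_increases_balance()
test_deposit_rejects_non_positive_amounts()
```

The same discipline applies whether the behaviors are written as Gherkin scenarios or as test functions: one desired behavior, one check.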

"Starting a new team" focused on the issues of creating a new QA organization in a company. I've been in this situation a few times, and each time has been as unique as the teams hoping to implement the new approach. Testing is important, and it needs to happen at all levels of the product and process. Having a QA "role" may or may not be appropriate, but testing is going to happen, and absolutely needs to happen. Everyone needs to own quality, and everyone needs to be a part of the testing equation. Encouraging a culture of continuous testing takes communication and setting an expectation that everyone needs to be involved in testing. If there is a dedicated tester role, more important than just testing is being able to share important and actionable information. Learn what the team does to share information. Learn how the team wants to deal with issues and defects, and the tools and infrastructure that are in place. Most important, testers need to learn the application they are working on and get as much experience with it as possible. It seems obvious, but that step alone really helps to get a feel for what is a nice-to-have and what is absolutely essential.

All told, a great start to the day. Looking forward to the next session. Stay tuned for further posts (as I did at CAST, I will make a new post for each talk or topic). More to come :)!

Wednesday, August 5, 2015

Early Morning Musings - Live from #CAST2015

For those not familiar with the concept of Lean Coffee, a group gathers together (coffee optional), proposes a set of topics, organizes the topics to see if there are synergies, votes on the topics to set the order of discussion, and then goes into discussion of each item for an initial five minutes. If there is still energy for the topic after the first five minutes, we can vote to continue the discussion or stop it and move on to another topic.

Today's attendees are/were:

Perze Ababa, Carol Brands, James Fogarty, Albert Gareev, Dwayne Green, Matt Heusser, Allen Johnson, Michael Larsen, Jeff MacBane, Justin Rohrman, Carl Shaulis

Topics that made the stack:

Testing Education, What's Missing?

Thoughts thrown out by the group: Accessibility, Test Tooling, Mobile and Embedded, Emerging Technologies, Social, Local, Geographically Tied Applications, Testing Strategy

Of all of these ideas, testing strategy seemed to get the most traction. Everyone seems to think they know what it is, but it's a struggle to articulate it. Regulatory compliance could be a relevant area. What about shadowing a company or two to see what they are doing with their new testers, especially those who are just starting out? What are their needs? What do they want to learn? What do those companies want to have them learn? Consider it an anthropology experiment. The action item was to encourage the attendees to see which companies would be game to be part of the study (anonymized).

How Can We Grow Software Testers Through our Local Meetup Groups?

Getting topics of interest is always a challenge. How do we focus on areas that are interesting and relevant without being too similar to what people deal with at work? Albert has been hosting a Resume Club at his Toronto Testers Meetup, and that's been a successful focus. Gaps in experience can help drive those discussions. The Lean Coffee format itself can be used in meetups, and the topics discussed can help surface new topics that appear to be of interest to the community. Encourage games and social interaction; we don't necessarily have to focus on talks and discussion. The AST Grant program can also be used for this. We offer grant funds for meetup support, but we also have the option of flying in people to meetups to help facilitate events as well (Michael did this in Calgary back in 2012 for the POST peer conference). If a monthly meetup is too difficult, commit to quarterly and work to recruit speakers in the in-between time. If the critical mass gets large enough, schedule more meetings. Have a round robin discussion from a talk that's already been presented and recorded. Make workshops based on topics of interest.

Writing Code For Testers Via Web Based Apps

Matt discussed this around a web app that aims to help teach people to program (as part of a book to teach people how to code in Java). In a way, this is a two-part problem. On one hand, there's the interface to engage and inform the user to get involved and learn how to code. The second is the meta-elements that can determine if the user has completed the objective and can suggest what to do. Both require testing, but both have different emphases. The ability for an application to determine if "code is correct" can be challenging, and there is always TIMTOWTDI to consider (There Is More Than One Way To Do It).
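A rough sketch of the TIMTOWTDI wrinkle: rather than comparing the learner's source text to a model answer, run the code and check its observable behavior, so any correct implementation passes. The function names and the exec-based approach below are my own illustration (in Python, though Matt's book targets Java), not how the actual app works:

```python
# Grade submitted code by behavior, not by source text, so that any of
# the "more than one way to do it" solutions can pass.

def passes_objective(source, func_name, cases):
    """Run learner code, then check the named function against test cases."""
    namespace = {}
    exec(source, namespace)  # illustrative only; a real grader would sandbox this
    func = namespace.get(func_name)
    if not callable(func):
        return False
    return all(func(*args) == expected for args, expected in cases)

# Two different, equally valid implementations of the same objective.
solution_a = "def double(x):\n    return x * 2"
solution_b = "def double(x):\n    return x + x"
cases = [((2,), 4), ((5,), 10)]
```

Both solution_a and solution_b satisfy the objective even though their source differs, which is exactly why text comparison falls apart as a grading strategy.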

Good discussions, good ideas to work with, and so many more possible things we didn't even get to consider. The big takeaway for me is that it might be cool to run an anthropology experiment with other companies to see what they want to have their testers learn.

Time for breakfast, see you all in a bit :).

Tuesday, August 4, 2015

Leaping into Context - Live from #CAST2015

Erik Brickarp gets the nod for my last session today. After facilitating three talks, it feels nice to just sit and listen :).

Erik's talk focuses on "A leap towards Context-Driven Testing". When chaos and discord start raining down on our efforts, sometimes the best breakthrough comes with a clean break. In 2012, he joined a new team at a big multinational telecom company. That team had a big, clunky, old-school system for both documentation and loads of test cases (probably based on ISO 9001, and oh do I remember those days :p ). What's worse, the team was expected to keep using these approaches. To Erik's credit, he decided to see if he could find a way out of that agreement.

The team decided they needed to look at the product differently. Rather than just focusing on features and functions, he also decided to look at the ways the product could be tested. In the process of working out what the test approach had to be, they moved from multiple spreadsheets to web pages that allowed collaboration. By using colors in tables (as they had previously used in cells) they were able to quickly communicate information by color and by comment (reminds me of Dhanasekar's tutorial yesterday ;)).

By stepping away from iron-clad rules and instead focusing on guidelines, they were able to make their testing process work more efficiently. Of course, changes and modifications like these invite criticism. The criticism was not aimed at the actual work; rather, people were upset that a junior team member had gone behind the organization's back to "change the rules". Fortunately, because the work was solid and the information being provided was effective, informative and actionable, they were allowed to continue. In the following weeks, they managed to make the test team's deliverables slimmer and more meaningful, faster to create and easier to maintain. By using a wiki, they were able to make the information searchable, the reports listable, and everything easy to find.

Erik admits that the approach he used was unprofessional, but he was fortunate that the effort proved effective. As a lesson learned, he said that he could have approached this with better communication and could have made these changes without going behind anyone's back. Nevertheless, they did it the stealthy way, and so they have a much more fun story to tell. The takeaway here is that there are a lot of things we can do to improve our test process that don't specifically require corporate sanction. It also shows that we can indeed make changes that could be dramatic without introducing a ton of risk. Support is important, and making sure the team supports your efforts can help testers (or any team) make transitions, whether they be dramatic or somewhat less so.

Additionally, if you hope to change from one paradigm to another, it helps a great deal to understand what you are changing to and how you will communicate those changes. Also, make sure you keep track of what you are doing. Keeping track doesn't mean having to adopt a heavy system, but you do have to keep track. Exploratory testing doesn't mean "random and do anything". It means finding new things, purposefully looking for new areas, and making a map of what you find. When in doubt, take notes. After all that, make sure to take some time to reflect. Think about what is most important, what is less important, and what you should be doing next. Changing the world is important, and if you feel the need to do so, you might want to take a page from Erik's book. I'll leave it to you to decide if it makes sense to do it in full stealth mode or with your company's approval. The latter is more professional, but the former might be a lot more fun ;).

Monday, August 3, 2015

Why CAST? - Reflection from #CAST2015

The first Conference for the Association for Software Testing (CAST) was held in 2006. This year, we are holding the tenth CAST in Grand Rapids, Michigan. Three more years and we can claim to have raised a teenager :).

I discovered AST in 2010. By the time I had joined and learned what CAST was, I was unable to arrange to attend that year. I did, however, commit to attending CAST being held in Seattle in 2011. Part of that was made possible because James Bach specifically invited me to attend, to demonstrate a real world Weekend Testing event as a workshop. Additionally, I took the opportunity to offer a short talk as part of the "Emerging Topics” track, titled “Beyond Being Prepared: What Can Boy Scouts Teach Testers?”.

What I found most interesting about CAST, as compared to other conferences, was what appeared to me to be the complete lack of commercial involvement. I was tired of conferences and webinars where sessions were mostly about "here, buy this tool, and all your problems will be solved". Instead, I was treated to real-world situations, with speakers who are actual day-to-day, in-the-trenches software testers. The material was memorable, but more to the point, it was actionable. I could actually use what I learned. Since that first experience, I have participated in every CAST to date (Seattle in 2011, San Jose in 2012, Madison in 2013, New York in 2014 and now Grand Rapids in 2015).

Additionally, I appreciate an emphasis on having new speakers take part in CAST. Last year, I had the pleasure of presenting a talk with a brand new speaker, Harrison Lovell, about “Coyote Teaching”, which was about mentorship. This year, I had the chance to see many new speakers get selected, and I am pleased to say that AST working with Speak Easy helped many new speakers prepare and present at CAST. It’s this willingness and openness to new voices that, I believe, sets CAST apart from other conferences.

As the President of AST, I understand the effort it takes to put on a conference, encourage people to attend, recruit speakers to present, and ultimately produce a program that is second to none. Today was our tutorial day, and from the conversations I've had so far, I firmly believe we are well on our way to making that a reality for this tenth CAST. For all of you who took the time to travel and carve out your schedule to be here, whether to participate in the audience or to deliver messages as speakers, workshop presenters, facilitators or volunteers, you have my gratitude. For those who were not able to attend in person, remember, you can still join us by watching the webCAST.

Here's to the next few days, I think they will be marvelous :)!

Long Fun Cup, I Fill You Up, Let's Have a Party - Live from #CAST2015

Three blocks down, one to go :).

This has been a productive and fun day, and I want to say thanks to Dhanasekar Subramanian for putting together an entertaining and informative session.

As we left the third block to fill up on Diet Mountain Dew and cookies (well, that's what I did, I really can't speak for the rest of the participants) we were looking at utilizing a mind map to sketch out the application and look at testing artifacts that we find. That's pretty cool in and of itself, but what about the next project? What could we do to consider and focus on a totally different app?

Truth is, we don't want to re-invent the wheel, but there are a number of key areas that we can ask "what if?" questions about. Instead of making a list of specific questions to build lots of specific mind maps, it can be helpful to have some common "rules of thumb" to draw upon. If you are reading that and want to yell "your honor, Testhead is leading the witness", well, yes, I am. For a lot of you, this is going to seem like a blinding flash of the obvious, but for those who are not familiar with the term, this is where heuristics come into play. Heuristics are wonderfully suited to mind maps. Sekar, in fact, has written about, and uses in his tutorial, a good heuristic for mobile app test coverage.

LONG FUN CUP

Below are the quick and dirty descriptions that Sekar uses to describe these terms. The "sins" are straight from his blog, and they get the point across, methinks ;):

Location: It’s a sin to test mobile app sitting at your desk, get out!
Orientation: It’s a sin to test mobile app sitting at your desk, lie in the couch.
Network: It’s a sin to test mobile app sitting at your desk, switch networks.
Gestures: In the mobile world, app responds to gestures, not clicks.

Function: Does the application fulfill its core requirements?
User scenarios: How easy or how hard is it to complete a task using the app?
Notifications: How does the app let us know something needs our attention?

Communication: How does the app behave after interruptions by an incoming call or an SMS?
Updates: How does your device handle updating apps? What happens when we do?
Platform: Why do Apple and/or Android do certain things in a certain way?

What I like about taking a heuristic and turning it into a mind map is that you can communicate a particular testing strategy up front and very quickly. LONG FUN CUP contains a lot of potential testing horsepower if it is thoughtfully applied. Labels, tags and other icons can also help communicate information quickly. In this case, a display of a mind map with icons for each area can be a quick information radiator. The areas without icons can be seen as areas that still need to be addressed. Areas with progress icons can show how much is done. Green check boxes can show that areas pass, or at least are not seen to be having issues at this time. Red X marks or exclamation alerts can point to potential problems, and text boxes can be filled in with more details or pointers to other documents that provide greater depth. What's more, with the right tools, these updates could be done on the mobile devices themselves, making for a nice virtuous cycle.

CASTing my Line into Mind Maps - Live from #CAST2015

Today the rubber meets the road. We are going live today with CAST 2015, breakfast has happened, the setup for all of the bags and badges has happened, the tables are manned, the food is being served, and the prep work for the webCAST is happening now (which I do not have any real involvement with other than to be grateful that it is happening and that Ben, Dee and Paul are making it happen).

Monday at CAST is tutorial day, and since the participants pay extra for the content, I have traditionally not done a play-by-play of these sessions, but I will talk a bit about why I chose to be in the one I am in and what my role is in being here. I'll also share a bit of my own observations around the topic without repeating Sekar's presentation. Also, to be kind to my readers, I will split today's coverage into separate posts, since many have said it's hard to follow one big long post throughout the day.

First off, each of us who are on the board and attending a tutorial are doing so as the room helper or facilitator for that tutorial. One of the things I asked of all of us on the board who wanted to participate was to work the back channels for the event. Since there was a limited number of seats for each tutorial, we wanted to make sure that the conference participants got the first chance to be there. We also made sure that each of us picked a different tutorial to be part of, so that we would be able to evaluate the individual sessions and report back from each of them as to their effectiveness, and learn things that could help next year's organizers choose and develop solid sessions.

There are four sessions being offered this year for tutorials:

Christin Wiedemann is leading a tutorial called "Follow your Nose Testing - Questioning Rules and Overturning Convention"

Dhanasekar Subramanian is leading a tutorial called "Mobile App Coverage Using Mind Maps"

Robert Sabourin is leading a tutorial called "Testing Fundamentals for Experienced Testers"

Fiona Charles is leading a tutorial called "Speaking Truth to Power: Delivering Difficult Messages"

Since we all opted to spread ourselves around the tutorial choices to be the room helpers and facilitators, I chose to work with Sekar and be part of the "Mobile App Coverage Using Mind Maps" tutorial. One of the reasons I chose this tutorial is that I have seen a variety of mind maps used by people over the years, and I tend toward a fairly simplistic use of them. I tend to start with a core concept, branch a few ideas off the core, and then break them down to a few words in the branches. If I need more depth, rather than make a complex mind map, I will usually just create a new mind map with another concept. The idea of having multiple branches on the same map just feels messy to me, but at the same time, having to jump between multiple maps is also messy.

Sekar's tutorial covers two concepts at the same time. The first is giving participants who may not have worked with mind maps before a chance to do so. The second is testing mobile apps and categorizing the details of the app. The benefit of using mind maps in the process of testing is less the rigid use of the tool and more the idea that each core concept gives us several points where we can branch off.

One of the fun things we do in these tutorials is get everyone on the same page and playing with the discussions. In this tutorial we are encouraged to break into groups and discuss a variety of topics. For mine I chose music, and what I find interesting with mind maps is not so much what we add to the mind map, but why we add them. My map for music was broken into Instruments, Songs, Genres and Modes. Why did I choose those? Possibly because I am a musician, and those are the things I think about. My guess is that a casual listener might not even know that modes exist, so they wouldn't include it as part of their breakdown.

It's time for our morning break, so I'm going to call an end to this post. I'll be back with another one in a bit. Thanks for joining me today :).


Saturday, August 1, 2015

Testing the IoT - Live from #TestRetreat

Ah yes, the Internet of Things! That weird cross-section of devices and services that are super specific or focused on areas that are still being defined. We hear about thermostats that learn, or devices that report blood sugar levels for people with diabetes.

The Internet of Things is going to introduce all sorts of devices, and needs for testing, that come in all different shapes and sizes. Scott Allman brought in a bunch of interesting devices that are tiny and can be deployed for simple purposes. Imagine a device the size of a nine-volt battery that can act as a full stack web server.

One of the fun aspects of these tiny devices is that they often have very primitive interfaces, or raw Linux-style interfaces. This encourages people to play around with the devices and interact with them directly. Additionally, because these tiny devices are so cheap, it's possible to deploy multiple systems and set up little networks and sandboxes for very little cost.
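To make that concrete, here's a minimal sketch (the sensor names and readings are invented for illustration, not from any particular device) of the kind of full stack one of these battery-sized Linux devices could serve, using nothing beyond the Python standard library:

```python
# Hypothetical sketch: a tiny Linux-capable device exposing its state
# over HTTP. The "sensor" is faked; a real device would read hardware.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_sensor():
    # Stand-in for a real hardware read (temperature, battery voltage, etc.).
    return {"temperature_c": 21.5, "battery_v": 8.7}

class SensorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(read_sensor()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep a headless device quiet

def serve(port=8080):
    # One of these per cheap device gives a whole sandbox network of
    # real HTTP endpoints for testers to probe.
    HTTPServer(("0.0.0.0", port), SensorHandler).serve_forever()
```

A handful of these on a shelf makes exactly the kind of cheap sandbox described above: real endpoints with real network behavior, at a fraction of the cost of standing up conventional servers.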

Some of the challenges we face with these devices are in the realm of security, as well as how the system is powered (battery vs. dedicated power, using protocols like Bluetooth Low Energy, etc.). There are also devices that use accelerometers, so part of testing includes subjecting the device to motion. Additional tests we might care about would be related to power, heat, durability, responsiveness, performance, usability, and accessibility. Many of the test approaches and themes that we use for mobile testing can be applied to the Internet of Things as well.

What I find interesting is that these could be very low-priced devices to experiment on with performance and security tools. I could see creating a small network of devices and pointing JMeter and Kali Linux at it to poke around. I joked that it would be fun to have a belt where each section is a separate server, so that high availability and load balancing applications could be experimented with.

I noticed that in this talk, I kept thinking about using these devices for servers and applications I'm already familiar with. There's lots of applications I haven't even remotely considered, and I know that with time as I get more familiar with uses, I can start applying ideas for these devices and see what other avenues of exploration I can discover.

Teaching Testing to Programmers - Live from #TestRetreat

We hear a lot about teaching software testers how to program, but how often do we hear the converse? Why don't we ever hear about software testers teaching programmers how to test? Jesse Alford thinks this deserves to be a two-way street, and that there's a lot of opportunity for engineers to pick up exploratory testing skills to support the team in growing and learning.

One thing that should be mentioned is that Jesse works at Pivotal Labs, and Pivotal is a pretty awesome outfit with a lot of high-functioning people, including a fairly good number of engineers who are well versed in software testing methodologies.

There are a number of well-known games that we use as testers. We use the Dice Game, the Pyramid Game, James Lyndsay's black box puzzles, and simple games like "the Pen Test" that can be infuriating, but can teach us how we model problems and apply testing principles.

A phrase that Scott Allman shared, and that I like a great deal, is that "our job as testers is to discover laws that have been smuggled into the model universe that the programmers didn't know about". Having the opportunity to share some of these exercises allows us to share this mindset with programmers in an area that is separate from their core experience.

One suggestion that Jesse makes is to have two people help run the game, especially when the game in question has a "find the rule" element to it (Dice Game, Zendo, Pen Test, etc.). One person runs the game; they know the rule and the aspect that makes the game relevant. Another participant is there to help coach the person playing the game, but they don't know what the rule is. They are familiar with the game itself, but they don't know the particular rule for this round. That way, the coach is also operating from a position of not knowing the specific answer, but is still able to coach the participant in how they might go about solving the problem. One point Jesse made is that what sometimes makes these games less effective is the "all knowing" game master: the coaching the game master provides turns into maddening hints, coming from the person who knows the answer. By having a third person who doesn't know the rule, the feedback on trying to solve the problem is less biased and carries less of that "knowing glance".

Another way that I personally would encourage having a programmer get involved with testing is to have them participate in a few Weekend Testing sessions. These are great training grounds for session based testing, as well as to explore some interesting avenues for software testing via individual topics. The cool thing about doing these sessions is that a lot of them are focused on black box skills, and they allow everyone to discuss what they are working on and learning in the process.

It's fun to take the chance to learn from each other, and testing games are an easy way to get programmers to see what it is we do and how we do it.


Bringing Energy Back to Testing - Live from #TestRetreat

Picture the scenario: a tester has been in the game for a number of years. They know the details, they know the product, they've done countless stories, and at some point the testing becomes rote, paint-by-numbers work. The thrill is gone. The excitement has left the building. Do you see yourself in this scenario? If not, fabulous, this talk is not for you ;).

If, however, you have been in this situation, or currently see yourself in it, then Phil McNeely's session on "Bringing Energy Back to Testing" is for you. I know how this feels. I've been there and done that. At a certain point, I lost the joy and fun that testing used to provide, and found I was just going through the motions. It wasn't intentional on my part, I didn't intend to go on autopilot, but there have been times where I just did what I had to do, and often what I really didn't want to do. Often the reasons came down to doing the same thing over and over. Sometimes people just burn out. For many, their passion is somewhere else, like snowboarding, or knitting, or writing a novel. For those people, it helps a lot to accept that their true passions lie elsewhere and to encourage them to invest in those areas to the point that they feel they are getting satisfaction there; the follow-on effect is that they can then focus on what they do at work.

In my case, I often found myself overcommitted to too many good things. It's not that I was necessarily burned out on my work, but that I was committed to my day job, and to writing, and to teaching, and to community engagement. In short, I was spreading myself too thin in too many areas. At those times, it became apparent that I needed to "give myself a haircut" across the board. When a tester who was at one time productive seems to be less focused or engaged, it might be worth making a lunch date just to see what is going on in that tester's life at the moment, both inside and outside of work. You might discover that they have recently become the PTA president at their school, or have taken on an important but time-consuming position at their church, or that an illness or other family situation is drawing on their energy. I know that when I have too many things happening in my life, every area suffers, and yes, that includes work. By realizing that energy commitments have changed, we can help that person (or ourselves) consider what options exist to make modifications.

I wrote about this a few days ago over at the ITKE Uncharted Waters blog, but something I am using to help me stave off this challenge is an Objective Journal. By making me consider what I am working on, and question it on a regular basis, it keeps me engaged with the problem at hand, rather than waiting for something to complete and going off to work on something that's not on target or, sometimes, not even remotely productive. The trick for me with the Objective Journal is that it allows me to see small, everyday wins. By seeing those wins, I stay motivated and excited.

Ultimately, regardless of how engaged or not engaged we are, at the end of the day we need to realize that WE are the ones that need to develop our motivation. We can encourage and offer to help others achieve motivation, but externally motivated people tend to not stay motivated for too long. Internally motivated people can stoke that fire indefinitely, so work to encourage that spark in others, but more importantly, help them develop and maintain that spark in themselves.

What is the Career Path for a Tester? - Live from #TestRetreat

Carl Shaulis asked a simple question, or what seemed to be a simple question... what are some of the skills needed at various points of a career for a software tester?

There are many variations of software testing and approaches to software testing, but for many of us, it seems that there is a specific path. The first round is what we as a group called "the muscle", i.e. classic manual software testing. The starting point for this is, to borrow from Jon Bach, a need for curiosity. Jon has said that he can teach people technical stuff as needed, but he can't teach people how to be curious (I will come back and get the quote and post where he says this, but for now, forgive the live blogger and lack of immediate attribution ;) ).

Another aspect we discussed is that the skills themselves are not specifically better at higher job levels. Many of the skills of a Level 1 tester are still applied at higher levels. Senior testers don't necessarily have more specialized knowledge, but they do have the experience of using it over several years. What is expected at any level is the ability to evaluate a product, to look at a workflow with a critical eye and determine if something is out of place or not working the way people expect. Critical thinking skills are valuable at any level of the job. The differentiators tend to be not the skills, but the level of influence the individual has within the organization. Junior testers and senior testers are often differentiated not so much by skill level as by experience, leadership, and overall influence.

One obvious question to ask is "does a tester need to learn how to code and make coding part of their job if they want to advance?" My answer is that, if you want to be a toolsmith and work on test tooling, then yes, programming is essential. If you don't aim to be a toolsmith, or if you are not interested in focusing on automation or tooling, then programming may not be essential. Having said that, I think that many more questions are capable of being asked and evaluated when a tester can look at the underlying code and understand what is happening, even if just in a general sense.

Many of these discussions come into play so that organizations can have a shorthand when it comes to job titles, compensation, and the ability to slot people into an organization. I've primarily worked in smaller companies the past fifteen years, but during the first ten years of my career, I worked at Cisco Systems, which went from a smallish 300-person company when I joined in 1991 to a 50,000-plus-person company when I left in 2001. Early in the life of Cisco, job titles were less important than they became later. When dealing with a much larger company, titles and the shapes of the "cogs" start to matter. Within smaller teams, generalists and people able to cover lots of different areas are much more important, and there's a fluidity in the work that we do. Career path is less relevant in a smaller company, and the reward aspects are different. In many ways, in a smaller company, you are not rewarded with titles or advancement; you are rewarded with influence (and in some cases, with money or equity).

As a closer, it was suggested that we check out the Kitchen Soap article "On Being a Senior Engineer".  There is a lot of meat in this one so I might do a follow-on post just on this article :). ETA: Coming back to this two and a half years later, I was asked if I would consider adding "7 Reasons You Can't Get a Junior Web Developer Job" as a follow-on read. Granted, a little outside of the software testing space, but many of the same issues also fit this discussion, so yes, considered and added :).

How Can We Interview Testers Better? - Live from #TestRetreat

One of the challenges that anyone who has been involved with hiring software testers can tell you about is that interviewing testers can be maddening. We can find people, but getting the right people is often a struggle. We have all had the experience of reading a resume and seeing what looks to be very promising history and experience, only to bring the candidate in for an interview and have to ferret out what they actually do or do not understand. Is there a way that we can do this better? Dwayne Green led a group of us in discussing how we approach these interviews and how we can improve the process.

Many of us have worked through resumes and had to make phone screens or initial interviews, and a common phrase that came up was to "audition" the candidate. There are several approaches to auditions that we can use. Some people will use a sample program and walk through how to test it. Some people like to use their own company's product as part of the audition, and to see how the candidate tests the product, or if they can find problems we already know about.

A discussion I recently had with the VP of Engineering at my company was about the way we expect people to work. Most programmers have certain things open on their desk at all times: their IDE or programming environment, and a browser with Google, Stack Overflow, or some other reference site in its tabs. The point is that, when we work, we tend to have these tools open to help us get to the things we need. When we interview, we deprive candidates of that ability and expect them to "code" on a whiteboard. One suggested change would be to ask candidates in advance what tools they like to work with, and invite them to bring those along. When we put the audition challenge in front of them, we also say "use the tools you would normally use". This has tended to give a more representative view of how those programmers actually code. We discussed the idea that people should do the same thing when it comes to software testing: let a candidate come in and use the tools they already understand.

One of the questions we asked was "how could we make these interview approaches more real, but avoid a situation where we are making people 'work for free' as part of the interview?" There is always a danger that putting people into a simulation or scenario with the company's product runs the risk of replicating real work without compensation. One approach could be to use a virtual machine with a version of a release and a story that has already been worked, along with a list of issues that have already been discovered. Do we want to have them run an abbreviated test session and see what they discover from our list? Do we want to have them sit with our team members and pair for a particular period of time? In my view, we need to make clear that we are going over material that has already been covered, and that we are using it as evaluation criteria and not as a way to get "free testing" out of someone.

Some people like to use games or use sample programs to do these experiments. The issue there is that some people are not good at games, or are not good at particular applications. Does that mean they are a bad tester? Probably not, but it does mean we need to take into consideration multiple avenues. They might not be good at one area, but be great in others. If we find that they don't do well in multiple areas, that might well give insights as to how well they will perform or not perform in our work environment.

We all discussed some of the worst experiences we have had with interviews, and I shared that I would often be asked strange questions related to math problems or other puzzles that seemed to have no bearing on what we would be testing. Over time, I came to realize that these questions served one purpose: does this person think like I do? To be fair, it's been years since I've had to deal with those types of questions, but I know that some people still do, and if I can make one plea, it's to say "knock it off!" ;).


On Being The "Least Talented" Person in the Room - Live from #TestRetreat

Good morning from Grand Rapids, Michigan. I'm still functioning on West Coast time, but everyone else around me thinks it's the middle/late morning, so I need to kick in and get with the groove of TestRetreat. For those not familiar, TestRetreat has been an ongoing event the past few years hosted by Matt Heusser and is an un-conference event with a broad list of ideas and talks being presented by people who have interests in given areas.

A piece of housekeeping this time around: in the past, I have live-blogged events in a day-long format, updating a single page during the day. Due to the way CAST is being run this year, and because I will be facilitating rather than just attending sessions, I won't be able to do the live blog the same way. Instead of a running log, I'll be making individual posts for each session or event. The downside is that there will be multiple posts in a given day. Let me know which way you find most useful :).

Our first session is being hosted by Ajay Balamurugadas, and he's exploring an interesting dichotomy: he at times feels like he is the least talented person in the room, yet he is the one who gets invited to speak at and participate in conferences. Why is that?

I've personally struggled with this attitude myself, and in part, I think the title is a little misleading, but the overall perception is an important one. By starting with the idea that we are the least talented person in the room, we are stating that we feel that we have a lot we can learn, and we are willing to put in the time to learn it. I would be hard pressed to consider Ajay the least talented person in any room, but I would say that that sense of humility and desire to learn as much as possible drives him and gives him the motivation to keep learning and growing. Adding to this is the fact that when you do create a reputation as a person who is actively engaged in the broader community, you can be held to a standard that is unfair to where you actually are. As I've talked to people over the years, I've likewise seen that they consider me to be a "thought leader" or "testing guru", and at times, I've considered their expectations to be unfounded. It's not that I have any super level of expertise, but the fact that I've written about or talked about the problem in any capacity tends to make people think I know or understand more than I do.

I'm not sure there is an easy way to deal with this other than to acknowledge the reputation that being actively engaged gives someone, and to say "just because I write and talk about these topics, don't think that I am somehow superior to those who don't write or speak on them" (fact is, some people who are mega performers do not engage in the broader community, and that's totally OK). I personally tell my team that I am allowed one completely stupid question of each member of my team per day, and that they should accept it. To date, I've only had a couple of instances where people actually said "wow, you should know this". Many times, the answer I get is "no, that's not a dumb question. We don't really describe or define that area very well." The one thing I think is important is that the onus is on us to communicate both what we understand and what we don't understand. It's also important to gauge what people expect to see from us, and whether we are meeting those expectations. If we are not, we should determine what their expectations actually are.

Tuesday, July 7, 2015

Learning Something "Sort Of" New

I want you all to meet a new friend:



This is an A-style flat back acoustic mandolin. It's an instrument that, in many ways, is similar to a violin. It has eight strings, arranged in four doubled pairs. The tuning is the same as a violin's: the thickest, lowest-pitched pair is tuned to G, the next pair to D, the next to A, and the thinnest, highest-pitched pair to E.

I've played guitar off and on since I was a teenager, and during my years of playing guitar, I cemented a lot of bad habits, but also learned some cool things along the way. I never managed to get proficient enough at guitar to make it a primary instrument, but I did learn enough guitar to be able to write my own songs and show them to other people. One of my great laments is that I never really learned how to play other people's songs. While I can fundamentally read music (I'm quite good at reading the tenor line in vocal scores) it takes me time to read out chords that cover two staves, and it slows me down a great deal. Because of my impatience, I pretty much abandoned trying to read music to figure out already written songs, in favor of using what I already knew to write my own.

A few weeks ago, I had the chance to meet and make the acquaintance of a musician who enjoys performing at various events. We met while interacting with a "living history crew" for a pirate festival. As part of our interaction with the encampment, we got together and worked out a few songs, him on guitar and singing, and me singing accompaniment and hitting a hand drum. The experience was a lot of fun, and we both decided we should put some more songs together for future events. I likewise decided I wanted to accompany on those songs, but since we already had a guitar player, I said "OK, then I'm going to buy a mandolin and learn how to play it, just so we can do this!"

With the first requirement out of the way (that of issuing a "bold boast") I set my sights on finding an entry level mandolin. Frankly, there are many different mandolins out there, ranging from $35 to $3500 and more. I certainly wasn't going to spend $3500 on what might be a whim, but I wanted to make sure I had something usable, reasonably playable, and something that I wouldn't be heartbroken if it somehow got damaged (considering this is going to travel to events that will likely be held in parks, on seashores, and on boats in various places ;) ). To that end, I picked up a Rogue RM-100A Tobacco sunburst mandolin. It feels good enough to play without costing me a lot of money, and if I feel I develop beyond what it offers, I can always upgrade later.

One of the things I am fond of saying when it comes to playing an instrument is that you are better situated to learning a second, third, or tenth instrument if you have already developed some skill at a first one. Each instrument is analogous to another one, and the skills you learn with one can help you learn another. In this case, my guitar experience is both a help and a hindrance. It's a help in that the fretboard dynamics are very similar to a guitar. Fret spacing is different, but the underlying idea is similar. It's a hindrance in that the strings are tuned in fifths rather than fourths, like I am used to. That means all of the chord patterns I learned on guitar don't help me play a mandolin... at least not in the same way. I'm not sure if this is just something I never considered before, or I never had a reason to examine it, but the fifths tuning of the strings on a mandolin (G, D, A, E) is actually the reverse of the fourths interval tuning of the sixth, fifth, fourth and third strings of a guitar (E, A, D, G). That means, if I look at the mandolin's string pattern in reverse order, then the chord and scale patterns I know from guitar actually do work. They aren't perfect matches, but they have been a big help to my remembering where on the fretboard I am, and what I am doing.
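The reversed-tuning observation above is easy to sanity-check. A minimal sketch (string names only, ignoring octaves; the variable names are mine, not anything standard):

```python
# Standard tunings, listed from thickest (lowest-pitched) to thinnest string.
guitar_low_four = ["E", "A", "D", "G"]  # guitar's sixth through third strings, tuned in fourths
mandolin = ["G", "D", "A", "E"]         # mandolin's four courses, tuned in fifths

# Reversing the guitar's lowest four strings yields the mandolin tuning,
# which is why familiar guitar shapes can be "read backwards" on a mandolin.
assert list(reversed(guitar_low_four)) == mandolin
print("mandolin tuning is the guitar's lowest four strings, reversed")
```

This works because an ascending fourth and a descending fifth land on the same letter name, so walking the guitar's string letters backwards traces the mandolin's fifths.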

Too often, I think we take an experience we had before, and we think "oh yeah, this will be simple" or "oh no, that's going to be too hard". We rarely think "hmmm, I've had experience in one area, I'm curious to see how much of it will apply and how much I will have to initially learn or relearn to become proficient with this (fill in the blank)". For me, I have some interesting goals and challenges that just may help me:

-- the mandolin is an interesting sounding instrument in its own right, and therefore it will be fun to learn how I might be able to play it.

-- I already have a goal to learn other people's songs; it's unlikely I will be called upon to write new nautical sea shanties, but hey, one never knows ;).

-- I have three children who also play string instruments (violin, viola and cello). I can sit in with them and we can look over songs together.

One additional thing I am doing now, which may become annoying to some, is that I'm bringing it with me wherever I go. The reason? I want to be able to practice whenever a spare moment arises. Perhaps not sounding out the instrument, but at least practicing chords or fingering technique, or reading music and trying to see if I can place my fingers faster on the fretboard so I can improve my sight reading. Who knows where this will lead, but if perchance you happen to see me, and I have the mandolin with me, go ahead and ask me to sing "Get Up Jack, John Sit Down!" or any number of other sea shanties and pirate appropriate tunes. I look forward to the challenge of learning them and, hopefully, performing and singing them well sometime soon :).

Monday, June 15, 2015

Knowing When To Step Down

For the past four years I have had the joys & frustrations of working with an organization, as well as serving on its Board of Directors. That organization is the Association for Software Testing (AST). The positions I’ve served in for those four years include three years as the organization's treasurer, and this past year (so far) as its president. These years have been filled with successes and challenges, satisfying goals completed and frustrating loose ends still to be resolved.

In August, at the Conference for the Association for Software Testing (CAST), those candidates who wish to run, or who are up for re-election, will put their hats in the ring and make a case as to why they should be selected. Earlier this year, I anticipated I would be creating a post asking for your support. Instead, I am putting this post together to encourage others to run and get involved, as I will not be seeking a third term.

Why am I making this decision, and why am I talking about it now?

First, I want to give those who want to run for the board a chance to get their names out and be considered. Second, I want to discuss some of the things being involved with the board entails, and how you can be effective or hope to be effective. Third, I believe that becoming entrenched within an organization for too long can be a hindrance to moving forward, whether intentional or not.

Due to circumstances in both my work and personal life, and the time and attention needed in areas important to me (my family and my career), it is clear the time and attention I can provide to AST, in the role of a board member or executive officer, is no longer sufficient to be effective. To make the time to be effective, I will have to pull away from two critical areas. My kids are at a key point in transitioning from teenage years to adulthood. My work environment has changed due to the death of my director. I've stepped in to fill many of the roles he played. In short, the conditions that made it possible for me to be effective as a board member are not there now. To keep serving in this capacity would be a disservice to the organization. I want to make sure that the work I care about regarding AST can be accomplished. I still want to be part of that mission, but I have to be realistic as to what I can offer and do.

For the first three years of my involvement, I was the treasurer. That meant I had to make sure our financial house was in order. Making sure the money that came in and the money that went out was accounted for was my primary responsibility. Once you get a handle on it, you can do it reliably and have time to think about other things. During the years I was treasurer, we made great strides in breaking out where our money was going, and how to use that money effectively to help local and international initiatives. I still think the AST Grant Program is one of the best kept secrets of our organization. It’s there, but only a handful of people take advantage of it.

Three times a year, we gather together as an in-person group to discuss the business of AST. We have done our best to pick a central location to minimize traveling costs. For the past four years, that has meant the U.S. midwest or east coast. One of those tri-annual board meetings also coincides with CAST. Anyone who runs will need to be cool with traveling for those meetings.

Getting seven people to agree to a decision can be daunting. While we can reach consensus on a number of areas, sometimes we just don't have the bandwidth or the agreement to put those items into motion. We have been criticized for moving too slowly. The fact is, in some areas, we do move slowly. We are aware that we represent a large and diverse membership. No decision we make will please everyone. Still, we try our best to make choices and develop positions that will benefit the entire organization, rather than be of benefit to only a small number of members. Additionally, if we must make a choice, we will choose not to do something if the alternative is to do something poorly.

Once a month, we get together for a conference call to discuss business that needs to move forward, and making the time for that call to happen each month is important. Outside of these calls and the triannual in-person meetings, the work of the organization still needs to get done and moved forward. Often, real life interferes with that happening.

If you are interpreting my words here as saying “those who wish to run need to have both vision and bandwidth to make sure things get done”, you have interpreted correctly. If you are reading this and thinking I am dissuading others from getting involved, that is the opposite of my intention. I encourage those who do want to run for the board to do so, and do it loudly! While there have been stressful moments, it’s also been fun, and I’ve been really excited about what we have been able to do. I think CAST is one of the best software testing conferences out there. The vision of AST and the members of the board and its various committees make it possible. I think that BBST is a very valuable series of classes. I’ve enjoyed being an instructor these past several years. Even though I will not be on the board after November, 2015, my involvement with BBST will continue. I intend to keep teaching, and aiming to help improve the process and delivery of that teaching.

My recommendation for those interested in running would be to look at something AST does, and demonstrate how you can help sustain and/or improve what we are doing. If AST is not doing something you think we should be doing, make a case as to why you feel you can make that possible, and how you can help make that happen. In the past, those who've been elected had a goal they wanted to see achieved, and they had the energy to see it through. If this fits you, I wholeheartedly encourage you to see our Election page, and make a bid to run for AST's Board of Directors.

I want to thank the AST membership for four memorable years. Thank you for giving me the opportunity to serve in this capacity. I’m leaving the board, but I am not leaving AST, nor will I stop focusing on initiatives I feel are important. I must adjust to current realities, and serving on the board is a commitment of time, talent and energy. There’s a great group of people already there, and we will need great talent going forward. You could be one of those people.

Friday, June 5, 2015

The Value of Mise en Place

I have to give credit for this idea to a number of sources, as they have all come together in the past few days and weeks to remind me of something I think we all do without realizing it, and that actually harnessing the power of this idea can be profound.

First off, what in the world is "mise en place"? It's a term that comes from the culinary world. Mise en place is French for "putting in place", or to set up for work. Professional chefs use this approach to organize the ingredients they will use during a regular workday or shift. I have a friend who has trained many years and has turned into an amazing chef, and I've witnessed him doing this. He's a whirlwind of motion, but that motion is very close quartered. You might think that he is chaotic or frantic, but if you really pay attention, his movements are actually quite sparse, and everything he needs is right where he needs it, when he needs it. I asked him if this was something that came naturally to him, and he said "not on your life! It's taken me years to get this down, but because I do it every day, and because I do my best to stay in it every day, it helps me tremendously."

The second example of mise en place I witness on a regular basis is with my daughter and her art skills. She has spent the better part of the past four years dedicating several hours each day drawing, often late into the evening. She has a sprawling setup that, again, looks chaotic and messy on the surface. If you were to sit down with her, though, and see what she actually does, she gathers the tools she needs, and from the time she puts herself into "go" mode, up to the point where she either completes her project or chooses to take a break, it seems as though she barely moves. She's gotten her system down so well that I honestly could not, from her body language, tell you what she is doing. I've told her I'd really love to record her at 10x speed just to see if I can comprehend how she puts together her work. For her, it's automatic, but it's automatic because she has spent close to half a decade polishing her skills.

Lately, I've been practicing the art of Native American beading, specifically items that use gourd stitch (a method of wrapping cylindrical items with beads and a net of thread passing through them). This is one of those processes that, try as I might, I can't cram or speed up, not without putting in time and practice. Experienced bead workers are much faster than I am, but that's OK. The process teaches me patience. It's "medicine" in the Native American tradition, that of a rhythmic task done over and over, in some cases tens of thousands of times for a large enough item. Through this process, I too am discovering how to set up my environment to allow me a minimum of movement, an efficiency of motion, and the option to let my mind wander and think. In the process, I wring out fresh efficiencies, make new discoveries, and get that much better and faster each day I practice.

As a software tester, I know the value of practice, but sometimes I lose sight of the tools that I should have at my beck and call. While testing should be free and unencumbered, there is no question that there are a few tools that can be immensely valuable. As such, I've realized that I also have a small collection of mise en place items that I use regularly. What are they?

- My Test Heuristics Cheat Sheet Coffee Cup (just a glance and an idea can be formed)
- A mindmap of James Bach's Heuristic Test Strategy Model I made a few years ago
- A handful of rapid access browser tools (Firebug, FireEyes, WAVE, Color Contrast Analyzer)
- A nicely appointed command line environment (screen, tmux, vim extensions, etc.)
- The Pomodairo app (used to keep me in the zone for a set period of time, but I can control just how much)
- My graduated notes system (Stickies, Notes, Socialtext, Blog) that allows me to see which items I learn will actually stand the test of time.

I haven't included coding or testing tools, but if you catch me on a given day, those will include some kind of Selenium environment (either my company's or my own sandboxes, to get used to using other bindings), JMeter, Metasploit, Kali Linux, and a few other items I'll play around with and, as time goes on, aim to add to my full-time mise en place.

A suggestion that I've found very helpful is attributed to Avdi Grimm (who may have borrowed it from someone else, but he's the one I heard say it). There comes a time when you realize that there is far too much out there to learn to be proficient and effective at everything. By necessity, we have to pick and choose, and our actions set all that in motion. We get good at what we put our time into, and sifting through the goals that are nice, the goals that are important, and the goals that are essential is necessary work. Determining the tools that will help us get there is also necessary. It's better to be good at a handful of things we use often than to spend large amounts of time learning esoteric things we will use very rarely. Of course, growth comes from stretching into areas we don't know, but finding the core areas that are essential, and working hard to get good in those areas, whatever they may be, makes the journey much more pleasant, if not truly any easier.

The Early Bird Pricing for #CAST2015 Ends TODAY!



I know that there may be some last minute folks out there, and I want to assure you all that it would be well worth your time to sign up TODAY for the Early Bird Discount for CAST 2015. After midnight tonight, the prices go up considerably.

I should also mention that our Monday Tutorials are selling out quickly. Rob Sabourin's tutorial is sold out, Christin Wiedemann's tutorial [Update: is now also sold out], and we have two more tutorials with seats that will likely go fast. If you want in, act now :)!

Now look, I know what a lot of you are thinking... "of course he's yelling about CAST 2015. Since he's AST's president, it's his event!" In a way, you are right, but that doesn't even come close to telling the whole story. We are offering what I think is an amazing program, full of cool tutorials (on Monday) as well as excellent Track Talks and Workshops on Tuesday and Wednesday. I'm proud of the lineup, but I had absolutely zero hand in the selection process. I deliberately stayed out of those discussions, and encouraged the Conference and Program Committee to put together the most awesome program possible. From what I can see, I think they succeeded.

We have also partnered with Speak Easy to encourage more first time speakers to speak at CAST. This is something I take great pride in seeing at CAST each year, the diversity of speakers, not just of physical attributes, but of experiences, skills and opportunities that you are not likely to hear at other conferences. Bold statement? Sure. Do I stand behind it? Absolutely.

But hey, why take my word for it? Why not take a look at our program and see for yourself? If you like what you see, you still have, by my reckoning, about 12 hours to get in at the discount price. Don't miss what (if I have anything to say about it) will certainly be the best conference you attend this year. Also, if this fires you up, will you help me spread the word? Please share this post with any and all who may benefit from seeing it.

Here's to early August and a great conference. Lock those savings in while you still can :).