Wednesday, October 1, 2014

An Alternative Approach to Teaching History?

Over the past several years, I've found Dan Carlin to be one of the most entertaining and thought-provoking podcasters I've listened to. For some, he is grating, irritating, and frustrating. He doesn't use the standard narrative. In fact, he steadfastly refuses to. Both of his podcasts, Common Sense and Hardcore History, strive to look at current events and history from what he calls a "martian" perspective. In many ways, I consider Dan to be the most "testerly" of podcasters. He strives to take views and commentary from all sides, consider the possibilities and the arguments made, and then present them in a way that boils down to central themes and core ideas.

In his most recent podcast (Common Sense 281 – Controlling the Past), he makes a few points that I think would be immensely helpful, not just for K-12 history education, but for the entire way we teach any subject. Perhaps it's the contrarian in me, or perhaps it reflects my own frustrations and misgivings with the way school subjects are taught, but I think we do several things wrong, and the net result is that many children develop a serious aversion to actual learning and never discover the joy of learning and education.

I am currently living this reality with my three children. I now have a college freshman, a high school sophomore, and an eighth grader. I see what they are currently trying to do to get through their days in their various schools, and the adaptations each has to make. Ultimately, though all three are slightly different, they tend to suffer from the same problem. We operate our schools on the notion that facts and figures and dates and formulas need to be memorized and spit out on tests; an evaluation is made, and then we move on to the next bit. Sometimes, this works well. Sometimes, it doesn't. As an adult who works in an ever-changing landscape, I've had to embrace a different approach to learning. Also, as a software tester, I've often had to approach learning from a skeptical, and often even cynical, viewpoint. I'm not paid to say the product works. I'm paid to try to find out where it might be broken. My entire workaday life is a process of disproof and refutation... and I get paid for that ;).

Back to Dan and this podcast... one of the things that Dan highlights, especially in history, is that we tend to go through waves of revisionism. Fifty years ago, the Founding Fathers were near-mythical deities. Today, in many circles, they are seen as greedy, despotic "white men" who built a society on a veneer of freedom at the cost of slavery and the subjugation of others. Every few years, there seems to be some tug of war about whether we should be exposing everyone's sins or instilling virtue through printed hagiography. Dan's thought, and one I share, is "why are we doing either?". In other words, if we truly want to teach history and what has come before, why are we giving one narrative more air time than the others? What if, instead, we did something similar to what the news magazine "The Week" does? For those not familiar, The Week is a journal that presents many of its stories and headlines as a distillation of a variety of views from different sources. When a topic is presented, it takes headlines and stories from both "liberal" and "conservative" pundits, publications, and writers, and generally avoids editorializing on its own, with the exception of the actual editorials it publishes, which are clearly labeled as such.

What is the purpose of this type of presentation? It allows the reader to synthesize what they are reading, see the various viewpoints, the pros and the cons, and even the inherent biases of each side, and then reason out what it all actually means. It also gives a more balanced view of the events and the key players. Rather than forcing a viewpoint based on an ideology, it lets readers process what they are seeing, apply their own litmus test to the material, and look for the coherence or the inconsistencies, something testers are very familiar with doing. Think about what history would look like if we allowed this same approach. We don't tell the story of George Washington or Geronimo or Martin Luther King from just one side. It isn't hagiography or character assassination. It isn't sanitized or prettied up to meet an agenda. It's given as is, with the idea that the reader discovers who these people actually are, and that they really are just that: people. Possibly extraordinary, possibly flawed, almost always misrepresented. Gather multiple views, present them as is, and then let the student actually practice some critical thinking skills, synthesize the data presented, and then (gasp!) actually give an opinion or lead a discussion on what they've covered.

It's possible I may be completely insane proposing such a thing, but ultimately, I think the benefits would be huge. We talk a mean game about the importance of critical thinking. Wouldn't it be awesome to actually let students, I don't know... critically think?! Also, and I may just be speaking for myself here, but wouldn't this also make the idea of studying history (or any other subject) way more fun? As a tester, ferreting out causes and effects, and advocating for the information discovered, are a huge part of the fun of testing. How great would it be to actually let students experience that in their everyday learning?

Again, it's a scary and bold proposition, but I'm just crazy enough to think teenage students are able to handle it, and might actually learn to enjoy these subjects in a way they've never really been able to before. What do you think? Realistic objective? Pie in the sky dream? If you had the chance to reshape how primary and secondary education were presented, what would you do?


Wednesday, September 10, 2014

Does Net Neutrality Matter to You? It Does to Me!

As I'm sure many of you already know, today is the "Internet Slowdown".

Tens of thousands of websites are currently running the slow-loading icon to symbolize what the Internet would be like without Net Neutrality. TESTHEAD is part of those tens of thousands. I'm happy to join with sites like Netflix, Vimeo, Google, reddit, Kickstarter, Meetup, Upworthy, and countless others who are also taking part in this.

The FCC's comment period ends in five days! If you want to make your voice heard, please feel free to use the Internet Slowdown widget that loads when this page loads to do so.

In fact, don't just feel free to do it; I urge you to do it :)!!!

If you've already commented, you can also use the widget to call Congress.

Below is the statement from Team Internet, one I am happy to share:

We believe in the free and open Internet, 
with no arbitrary fees or slow lanes for sites that can't pay. 
Take a stand for "Title II reclassification," the only option 
that lets the FCC stop Team Cable from breaking 
the key principles of the Internet we love. 

Will you join us? 



Monday, September 8, 2014

When Things Just Aren't What They Seem

Truthfully, I hoped I would never have to write a piece like this. However, the last couple of weeks have made it so that I can't seem to write anything without this occupying the forefront of my mind.

Twenty-seven years ago, I met a guy, a little older than I was, in the San Francisco Bay Area music scene. We were both relative newcomers, working on getting our first bands into the clubs. We had similar backgrounds, similar family lives, we were elder siblings with wild dreams, and we were both a bit on the gaudy and goofy side. I mean that in the absolute best way possible. We both had a penchant for garish clothing and big poofy hair, and enjoyed being creative and making things that we felt spoke to our individuality. We also both loved combing through the bargain bins at Tandy Leather Company, picking up whatever inexpensive notions we could find or afford, and making our own stage clothes with them. One night, I went out to promote one of my band's early shows and ran into this guy, doing the same for his band. We looked at each other, pointed, laughed, and said "dude, nice Tandy Leather bargain find"... and then laughed some more. We started talking. That's when I first got to know a wonderful man, and a friendship developed that would span several years and several bands, for both of us.

We became good friends, albeit in a "band rivalry" kind of way at first. We'd always high-five each other over the shows we were each playing, but always, in the backs of our minds, we kept a tally:

"Hey, congrats on your first Friday night gig!" (we just nailed a Saturday slot).
"Dude, you're opening for Band X? Awesome!" (we just inked to play support for Band Y).
"Hey congratulations for headlining at the Omni!" (we're headlining at The Stone).
"Aww, dude, we're playing the same night? What a bummer!" (I'm totally stealing your crowd).
"Dude, you're gig drew 700 people? That's awesome!" (ours drew 800).

and on and on and on... and we would always laugh about it. We started as rivals, became friends, and in time, became like brothers.

We'd been there through each other's ups and downs, the high points and the low points, the transitions that looked like bright futures and the valleys where we felt we'd screwed everything up. In 1989, I really felt like I had missed everything, that I'd made a terrible mistake leaving a band that had a solid track record and history and venturing out on my own. It all looked so promising at first, but then, misfire after misfire, in the latter part of the year, I was band-less and alone, and felt like I might never perform again. He jumped on top of me at a club one night while I was feeling down, and in his ever-sunny, ultra-cheerful way, gave me a pep talk, told me I just needed to have some faith, keep pushing, and not give up. He believed in me, and knew I'd find something much better than anything I'd been through thus far.

He was right. About two months after that particular pep talk, I agreed to go check out some friends from another band, one that I'd actually written off before, because two new guys had joined the group and brought a whole new sound and energy to what they had been doing. I decided I had to give them a listen. Subsequently, I joined High Wire, the band that would ultimately be the best group I'd ever perform with.

As things tend to happen, life wanders and meanders, and ultimately, my time with High Wire and with professional performance came to an end. I'd get married, start a family, and change careers, becoming a software tester. My friend would likewise move in different circles, relocate to Los Angeles, and have his own meandering river to follow. We'd reconnect several years later, first through MySpace, and then through Facebook. We'd touch base a time or two, here and there; we'd talk about old gigs, new challenges, where we each were in life, and how we'd gotten knocked down but always got up again. Through it all, my friend had that super sunny disposition, that "Shaka" sign we'd always throw each other, telling each other to just "hang loose". We'd been buffeted, we'd tumbled and been knocked about, but we always knew we were all right and we'd deal.

On August 26, 2014, the biggest miss of my life took place. I, the software tester, the one who says never to take anything truly at face value, woke up early to work on a project, and while reading Facebook, saw a post that shook me to my core. It was my friend, my long-time, happy-go-lucky, super positive friend, saying goodbye. Goodbye to everyone. Goodbye to life. And with that, all went quiet. I couldn't believe what I was reading. I wanted to believe it was a joke. It had to be. Maybe it was some trigger, and he'd come back and apologize, say he was sorry for getting carried away, and all would be well. We who were his friends checked with his family and other friends, and those of us who could spent the next few days talking to each other, getting the word out, trying to do anything, by whatever means necessary, to find him. We donated money to Search and Rescue efforts. We asked friends to share details about him with their friends. Most of all, we did not sleep, we did not eat, we stood at hair-trigger attention. We had to know. Was he OK, or not? Was he with us, or not?

Sadly, on Friday, August 29, 2014, we got the news. He had been found on a remote beach in Maui. He was gone. Our efforts to alert the media, to get Search and Rescue teams out to look for him, and to get flyers all over the island of Maui and the other islands of Hawaii were what had primed the park ranger to be looking for him. When he was discovered, they were able to make an identification; it was indeed him. A bright candle in many of our lives had been put out.

The days since have been challenging. I have so many questions. My heart aches to realize that a friend who was always there for so many of us, to cheer us up in our darkest moments, reached out to none of us in his. We had no clue. Many of us would never in a million years have believed this would happen, certainly not to him. It just made no sense. Yet once we started looking back, over the years, at the challenges and issues he faced, and the ups and downs that went with them, we realized there was a deeper pattern, one we hadn't seen, or really, one we didn't want to see. That bright and sunny disposition was masking what were, to him, years of deep disappointment and unfulfilled dreams, of recurring problems and painful memories, to the point where he believed it would be better to end it than to move forward.

I'm left here today to think "What could I have done? How could I have helped? Couldn't he call me?", and then to realize that I would have been asking him to reach out to someone who was, by now, a casual friend at best. Of all people, why would he have called me? If so many much closer friends were not reached out to, why would he choose to reach out to me, an old band acquaintance from 20 years earlier? The fact is, my connection with him wasn't strong enough to be that kind of a lifeline... and that realization stings now.

For those who have kept reading through all of this, I thank you. It's not my usual topic, and I'm afraid I'm stumbling here, but if there is anything these past two weeks have taught me, it's to trust those weird feelings we sometimes get, and to not believe the gloss and the sheen on the surface. I know this from software. The prettiest products often hide the most devastating bugs. I fundamentally know that about products I test, but applying that same reasoning to human interactions is much harder. Still, part of me now feels, after looking back at conversations and interactions, that the clues were there. They seem obvious now, but I missed them then. In software, when we miss a catastrophic bug, we learn, we regroup, and we try again, to make sure it doesn't happen in another release. Sadly, this miss can't be called back. It can't be patched. We cannot release an upgrade.

My hope with this post today is that I can ask my friends, my readers, and anyone who comes across this post in the future to do me a favor. Never take those relationships you have for granted. Be they your friends, your family, your co-workers or even casual pals from a scene you may share, look to see if there is something that might be a warning sign. If you think you may sense something, and your gut reaction tells you things may not be all right, reach out to them. Talk to them. Let them unburden if they choose to. If they don't, let them know you are there for them anyway. Do everything you can when things just aren't what they seem. You may do more than just cheer someone up. You may save a life. My friend has shed the pain he was feeling. It is no more for him. For those of us who are left behind, we will always wonder "What could we have done?" The simple answer at this point, and the painful one, is "Nothing, but maybe, just maybe, we can be more alert to help see that it doesn't happen again".

Wednesday, August 20, 2014

A TESTHEAD Wayback Machine Find: ALM Forum Talk From April 2014

I am grateful for a variety of friends and acquaintances in the testing world who keep me alert to things they discover. What makes it even more fun is when I'm alerted to things I did and forgot about, or someone discovers something I didn't know was there. Today is one of those days.

Back in April, I gave a talk about "The New Testers: Critical Skills and Capabilities to Deliver Quality at Speed". I posted the slides on my LinkedIn profile and my SlideShare account, and then went on with my reality. It was pointed out that a video of my talk was recorded, and now that I know where it is, I can share it here :).

http://vimeopro.com/user27088109/almforum14test/video/95238828


I give a shout out to SummerQAmp, PerScholas, Weekend Testing, and Miagi-do as exemplars of what we can do to help empower future testers with real skills. There are many others, to be sure, but these are the ones I'm actively engaged in, so hey, I'm biased :).

I hope you enjoy the talk, and if you do, please share the message with others.

Friday, August 15, 2014

Coyote Teaching: Watch How It All Came Together

Harrison Lovell and I decided to try an experiment.

What if a mentoring pair (a person relatively new to the software testing world and a longtime practitioner) were to work together and look at the way that mentoring is performed?

Could we learn something in the process?

What if we tried something novel, and looked at mentoring relationships all over the world, both current and ancient?

What would we find, and could we learn from them in a way that might prove to be useful to us today?


With that, we embarked on a several-month voyage (conducted mostly over Skype and email) and decided to try a method that takes its cues from ancient cultures. That method is called "Coyote Teaching", and we opted to be the Coyotes :).

During CAST 2014, which was held this past week at the Helen and Martin Kimmel Center in New York City, we had the chance to present this topic and approach, and Huib Schoots, a friend of ours, was kind enough to record the whole talk. For those who would like to see it, it is here in its entirety:



I want to congratulate Harrison on his first conference talk, and to thank him for his enthusiasm, as well as his hospitality in showing me around midtown Manhattan during the week (I should also mention that this was the first opportunity we had to meet face to face).


I am also thankful to those who gave us valuable feedback to make the talk even better than we originally envisioned. My thanks especially to Alessandra Moreira for helping me go over the fine points of the talk and acting as the counter debater to help poke holes in the ideas we were going to present.

With that, please watch "Coyote Teaching: a New (Old?) Take on the Art of Mentorship". If you like what you see, please comment below and let us know what you liked. If you don't like what you see, please comment below and tell us that, too :). Either way, we'd love to hear what you think.

Wednesday, August 13, 2014

The Conferring Continues at #CAST2014


Hello everyone! It's wild to think that my week in New York will be ending tomorrow morning. I've had so many great experiences, conversations and interactions with so many great people. I've met both of my PerScholas mentees, and I've enjoyed watching them take in this experience. It's also been great fun talking to so many new friends, and I will genuinely miss this "gathering of the tribe", but let's not lament leaving when there is a whole day of interaction and conferring (not to mention my own talk ;) ) still to happen.

Lean Coffee this morning dealt with an interesting challenge: the water main broke out in front of the Marlton, and the water for most of 8th Street was shut off. Water was only restored a little after 7:00 a.m., so the coffee part of Lean Coffee is only just starting to arrive for the participants.

One of the topics we started with was the transient nature of software testing, and why we see so few people who come into testing stay with testing. There are lots of reasons for this. In my career, many of the people I have met who were software testers have gone on to do other things. Some became programmers, some became project managers, some became system administrators, and some became managers. Of the people I knew who were testers for more than five years, most remained testers going forward. That's just my observation, but I think that, by the time people have reached the five-year mark as testers, they have decided that they either like testing or are good at it (or both). Of course, this may be an observation bias, since I am seeing people who followed my own path. It's an interesting topic to look at further.

The second topic focused on how software testers train and learn for their jobs, and how they find the time to do it. Many people make an effort to carve out time for what will be relevant to them. For me personally, I like to use my commute time on the train to read, think, and ponder ideas. Others use materials like Coursera or uTest University. Some people enjoy writing blogs (like me ;) ). Key takeaway: everyone has a different way to learn.

The third topic focused on how people get through the large amount of material at a conference. How do they capture it all? There are lots of techniques. My favorite is writing blog posts in semi-real time (like this one). Instead of just summarizing what I am hearing, I try to add a personal take on the details I have learned, so that what I write is more personal and actionable. Others use sketch notes, doodles, mind maps, recordings of talks, etc. The key takeaway is not the method by which you capture, but that you make what you capture actionable.

The fourth topic was moving into consulting or contracting, and what it takes to make that work. Several of the participants shared their experiences of how they made the transition, and the challenges they faced. In addition to doing the work, they had to find gigs, collect money, do paperwork, handle tax filing, etc. At times there will be unusual fits, engagements that may or may not be ideal, but a benefit of being a consultant is that you are there for a specific purpose and for a specific time, and at the end of it, you can leave. There's a need to be able to deal with a high level of ambiguity, and that ambiguity tends to be a big hurdle for many. On the other side, there is a mindset of learning and continuous pivoting, where there's always something new to learn. The pay can be good, but it can be sporadic. Ultimately, we all are consultants, even if we are employed by a company and getting a paycheck.

The final topic was "are numbers evil or maligned?". Some people look at numbers as a horrible waste of time. Don't count test cases, don't count bugs, don't count story points. In some ways, this quantitative accounting is both aggravating and somewhat necessary. Numbers provide information. Whether that information is bad or good depends a lot on what we want to measure. If we are dealing with things that are consistent (network throughput, megabytes of download, etc.), collecting those numbers matters. Numbers of test cases, numbers of stories, etc. are much less helpful, because there is such variation in what those values mean and how we can control them. When the measurement is not actionable, or is really vague, the numbers don't really make a difference. Numbers can be informative, or they can be noise. It's up to our organizations (and us) to figure out what is relevant and why.

Thanks to all the participants for being involved at this early hour :).

---

The morning keynote for today was delivered by Carol Strohecker of the Rhode Island School of Design, on the topic "From STEM to STEAM: Advocacy to Curricula". The focus was on the fact that, while we have been emphasizing STEM (Science, Technology, Engineering, and Mathematics), we miss a lot of important details when we do not include "Art" in that process.

STEAM emphasizes the importance of art and design, and how they matter to innovation and economic growth. Tied into this is the maker movement, which likewise emphasizes not just the functional but the aesthetic. There are a lot of neat efforts underway at both the academic and legislative levels to get these initiatives into schools so that we can emphasize this balance.

There are many tools we can use to help us look at the world from a more artistic and aesthetic perspective. Art is nebulous and subjective. It does not have the same level of solid, concrete syntax that science or language has (and language is pushing it). Much of that variance in the artistic leads to the development of a particular skill or attribute we call "intuition". It's not concrete, it's not focused, it's not based on hard data, but it informs in ways that are just as valuable. Artistic endeavors help to develop these traits. Cutting the arts out of our economic vision puts us at a significant disadvantage.

[I had to duck out at this point to take care of some AST business, so I can only give you my take on the actual details I heard discussed. Sorry for the gap.]

One of the quotes shared that makes great sense to me comes from Immanuel Kant:

"The intellect can intuit nothing, the senses can think nothing. Only through their union can knowledge arise."

It feels like this talk is resonating with many people, as its Open Season is vigorous and active. An emphasis on synthesizing the inputs and the areas we interact with makes for a richer and greater whole. It takes a different level of thinking to make it happen, but there are amazing opportunities for those willing to stretch into new disciplines. Many books were suggested, including "Inventing Kindergarten", which discusses the importance of play and discovery to learning and skill acquisition/development.

James Bach made an interesting comment. With the emphasis on STEM, and now STEAM, who gets left behind? Are there other areas that are now orphaned and unfunded, or is it that these areas have been unfunded for so long that they cry out for help? Does it make sense to work with a small group of disciplines, or do we need to consider that STEAM is meant to be an all-encompassing focus?


---


Up next is Harrison's and my talk... for obvious reasons, I will not be live-blogging then ;). I do, however, encourage everyone to tweet comments or even ask questions, and I'll be happy to follow up and answer them :).


---

During lunch, we got the results of the test challenge, as well as the results of the Board of Directors election.

Returning members of the board:
- Markus Gärtner
- Keith Klain (re-elected)
- Michael Larsen
- Pete Walen

New Members:
- Erik Davis
- Alessandra Moreira
- Justin Rohrman

Congratulations to everyone, I think 2014-2015 will be an awesome year :).

Ben Simo took the stage after lunch to talk about the messy rollout of the healthcare.gov web site and all of the problems that he, alone, was able to find with the site. Ben created a blog to record the issues he found, and it received a *lot* of attention from the media, and from the government as well.

For the details of each of the areas he explored, you can see the examples he posted at http://blog.isthereaproblemhere.com/. What I found interesting was that, as Ben tested and logged his discoveries, it became clear just how messed up so many of the areas were, and how his efforts surfaced strange issues without his even trying. Ben was not asked to speak to Congress or to testify, and he did not see any government action come from his efforts, but he did become the target of DDoS attacks, and media outlets were calling him very regularly.

On the side of the "Is There a Problem Here?" site, Ben lists a variety of test heuristics, and he applied most of them to help uncover the bugs he found. Many of the issues discovered fit into specific heuristics. Here they are:

CONSISTENCY HEURISTICS
from James Bach and Michael Bolton

H istory
I mage
C omparable Products
C laims
U ser Expectations
P roduct
P urpose
S tatutes
-F amiliar

FAILURE HEURISTICS
from Ben Simo

F unctional
A ppropriate
I mpact
L og
U ser Interface
R ecovery
E motions

SECURITY VULNERABILITIES
the OWASP Top 10

1. Injection
2. Broken authentication and session management
3. Cross-Site Scripting
4. Insecure Direct Object References
5. Security Misconfiguration
6. Sensitive Data Exposure
7. Missing Function Level Access Control
8. Cross-Site Request Forgery
9. Using Components with Known Vulnerabilities
10. Unvalidated Redirects and Forwards

What was also amazing to see was that all of these issues were discovered and reported using nothing more than his own data. He said he would not try to do anything to access other people's information, and he did not. Even then, he still found plenty of issues that should have made the healthcare.gov team both very nervous and very grateful.
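
As an aside, several items on the OWASP list lend themselves to simple, non-invasive probes that use only data you supply yourself, which is very much in the spirit of what Ben did. As a hedged illustration (this is not Ben's actual method, and the endpoint and parameter name below are entirely made up), a check for item 10, unvalidated redirects, might look something like this:

```python
# Hypothetical sketch: see whether a redirect parameter will bounce a user
# to an arbitrary external site. The URL and parameter name are invented
# for illustration; point this only at a site you are authorized to test.
import requests

TARGET = "https://example.com/login"          # stand-in application URL
EVIL = "https://attacker.example.net/phish"   # clearly external destination

resp = requests.get(
    TARGET,
    params={"returnUrl": EVIL},   # hypothetical redirect parameter
    allow_redirects=False,        # inspect the redirect rather than follow it
    timeout=10,
)

location = resp.headers.get("Location", "")
if resp.status_code in (301, 302, 303, 307, 308) and location.startswith(EVIL):
    print("Possible unvalidated redirect:", location)
else:
    print("Redirect parameter appears to be validated (or ignored).")
```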

---

Geoff Loken tickled my interest with his talk, "The History of Reason: Arts, Science, and Testing". As one who finds philosophy and all of its iterations over the millennia fascinating, I've found the different ways that reason and intuition developed over the ages to be a worthwhile area to study and learn about. Ultimately, philosophy comes down to epistemology, and epistemology led to the scientific method (observation/conjecture, hypothesis, prediction, testing, analysis).

Observation cannot tell us everything. There are things we just can't see, hear, touch, smell, or taste. At those times, we have to use reasoning skills to go farther. The scientific method does not prove that things exist, but it can, to a point, disprove claims about the nature of a thing's existence.

Geoff played a clip from Monty Python and the Holy Grail (the witch scene), which showed what could be considered a case of bad test design. Ironically, based on the laws and understanding of the world at the time, they performed tests that were actually in line with their standards of rigor. We have simply codified our understanding of more disciplines since their time.

One of the other aspects that comes into play when we are testing is that science can quantify, but it can't qualify. There are aspects of testing that we need to look at that go beyond cleanly definable, specific data.

Overall, I found this to be a lot of fun to discuss, and it reminded me of many of the historical dilemmas we have faced over the centuries. We look at what appear to be totally irrational actions in centuries past. It makes me wonder what from our current time will look irrational 500 years from now ;).

---

The last session I attended today was Justin Rohrman's talk "Looking to Social Science for Help With Metrics". "Metrics" is considered a dirty word in many places, and that disparaging attitude is not entirely unjustified. Metrics are not entirely useless, but measuring them in the right context is important. Used in the wrong context, they are at best benign and at worst downright counter-productive.

By focusing on the metrics that actually matter, we can look at measurements that tell us something about the systems we use, and about where we are in the life of the product. Some of the context-driven measurements we could/should be looking at include:


  • work in progress
  • cycle time
  • lead time
  • touch time
  • slack
  • takt time
  • time slicing
  • variation
  • Find -> Fix -> Retest loop

These measurements intrigue me, and they seem to be much more in line with what could actually help an organization. They fit well into what is referred to as the Lean model. Lean focuses on measurement for improvement, as contrasted with measurement for control.
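
To make a couple of these concrete: here's a small sketch (in Python, with invented issue-tracker data; a real script would pull records from your tracker's API) of how cycle time and the find -> fix -> retest loop might be derived from timestamps rather than counted by hand:

```python
# Sketch: derive cycle time and find->fix->retest loop counts from issue
# tracker records. The record layout here is invented for illustration.
from datetime import datetime
from statistics import mean

issues = [
    {"opened": "2014-07-01", "closed": "2014-07-08", "reopens": 1},
    {"opened": "2014-07-03", "closed": "2014-07-04", "reopens": 0},
    {"opened": "2014-07-05", "closed": "2014-07-15", "reopens": 3},
]

def days_between(a, b):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).days

cycle_times = [days_between(i["opened"], i["closed"]) for i in issues]
print("mean cycle time: %.1f days" % mean(cycle_times))

# Each reopen is one extra trip around the find -> fix -> retest loop.
loops = [1 + i["reopens"] for i in issues]
print("mean find->fix->retest trips per issue: %.1f" % mean(loops))
```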

I'm currently fortunate to work for a company that does not require me to chase down a number of useless metrics, though we have a few core metrics that we look at. These examples give me hope that we can get even more focused on measurement values that are meaningful. I'll definitely bring the context-driven list to my engineering team, or better yet, try to see if I can derive them and report them myself :).

---

Tim Coulter and Paul Holland wandered about the venue and checked out as many of the talks as they could, so they could share some "TimBITS" and takeaways. Some takeaways:

- Direct your testing keeping the business goals in mind
- Tighten up the feedback loop, it will make everyone happy
- Write your bug reports as if they were for a memory-wiped future you
- When you add a test specialist into a development team, everybody wins
- Skills atrophy: Testing skills must be used or you will lose them
- Social sciences all play a part in testing, it's not just technology
- Testers appear to be hardwired to play games
- Art has an impact on Software Testing
- Good mentoring is hard. Answer your mentees questions with more questions
- It's a sin to test mobile apps sitting at your desk
- To be a good tester you need to be a "sort of skilled hacker"
- Pair with people in all roles; they will all give you different insights
- You can't prove quality with science, but you can prove facts that may alter judgement
- Try to show the need for what you want to teach before you try to teach it
- "Hey, these aren't just testers but real people!"

Many of the participants are sharing their own takeaways. They cover many of the ones already mentioned, but they also show that people are seeing that "there is an amazing community that looks out for one another and actively encourages each other to do their best work".

---

It's been fun, but all good things must come to an end. We are now at the closing keynote, given by Matt Heusser, and the title for this one is "Software Testing State of the Practice (And Art! And Science!)".

The first thing Matt says he is seeing is a pendulum swing toward programming, and back again. The debate about what testers should be doing, what kind of work they should be doing, and who should be doing it is still raging. Should testers be programmers? Some will embrace that, some will fight it, and some will find a place in the middle.

Automation and tech are increasingly prevalent in our lives (there are supermarkets where self-checkout is becoming the norm). The problem with this prevalence is that we are losing the human touch and interaction. Another issue is that testing looks to be a transient career. It's entirely likely that more than 50% of the people here attending their first CAST will not even be in testing seven years from now.

Another issue we are seeing is fragmentation within testing. What does "test" even mean? What is testing, and who has the final say on the actual definition? We have a wide variety of codebases and strategies for how we code and how we test. All of this leads to a discipline where many people don't really understand what test teams actually do. To them, we are a check box that needs to be marked.

The Agile community in the early 2000s was in a similar situation, and they dared to suggest they had a better way to write software. That became the Agile movement, and it has changed much of the software development world. Scrum is now the default development environment for a large percentage of the software development population. Scrum calls for testing to be done by embedded members of the team, and not by a separate entity.

Matt refers to three words that he felt would change the world of software testing. Those words are:

- Honesty
- Capable
- Reach

We need to be honest with our dealings and we need to show and demonstrate integrity. We need to prove that we are capable and competent, and we need to reach out to those who don't understand what we do or why we do it.

I added a comment during open season, when the question was asked about why testers tend to move out of testing: I think there's something to be said for the fact that testers are broad generalists. We have to be. We need to look at the product from a wide variety of angles, and because of that, we have a broad skill set that allows us to pivot into different positions, either temporarily or as a new job. I personally have done stints as a network engineer, an application engineer, and a customer support representative, and even done a little tech writing and training, all the while having software testing as the majority of my job or as a peripheral component of it. I'm sure I'm not an isolated case.

There's a lot to be said about the fact that we are a community that offers a lot to each other, and we are typically really good at that (giving back to others). As I said in a tweet reply, if we inspire you, please bring the message back to your friends or your team. Let them know we are here :).


Tuesday, August 12, 2014

Live from New York, It's... #CAST2014


Hey! Finally, I'm able to work that silly line a little more literally ;). Yesterday was the workshop day for CAST, but today is the first full general conference day, so I will be live-blogging as much of the proceedings as I can, if the WiFi will let me.

CAST 2014 is being held at the New York University Kimmel Center, which is just outside the perimeter of Washington Square. The pre-game show for the conference (or one of them, in any event) is taking place in the lobby of the Marlton Hotel. Jason Coutu is leading a Lean Coffee event, in which all of the participants get together and vote on topics they want to talk about, and then discussion revolves around the topics that get the most votes.

---

This morning we started out with the topic of capacity planning, and how to manage and predict a team's testing capacity. Variance between teams can be dramatic (a three-person test team is going to have lower capacity than a fifty- or one-hundred-person team). One interesting factor is that estimation for stories is often wrong. Story points and the like often regress to the mean. One attendee asked, "what would happen if we just got rid of the points for stories altogether?" Instead of looking at points for stories, we should be looking at ways to get stories to roughly the same size. It may be a gut feeling, it may be a time heuristic, but the effort may be better spent just making the stories smaller, rather than getting too involved in adding up points.

Another interesting measurement is "mean time to fix the build". Another idea is to see which files get checked out and checked in most frequently, to see where the largest number of modifications are taking place and how often. Some organizations look to measure everything they can measure. One quip was "are they measuring how much time they are spending measuring?". While some measurements are red herrings, there are often valid areas where it makes sense to measure and learn what is needed as a remedy. The general consensus was that chasing lots of finely granulated measurements is less effective than simply targeting effort at getting the release stable and fixing issues as they happen.
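
For what it's worth, "mean time to fix the build" is one of the easier of these to automate. Here's a rough sketch, assuming you can export a chronological list of CI build results (the data format here is invented for illustration):

```python
# Sketch: compute mean time to fix the build from a chronological list of
# CI results. The format is invented; adapt to whatever your CI exports.
from datetime import datetime
from statistics import mean

builds = [  # (finished_at, passed?)
    ("2014-08-01 09:00", True),
    ("2014-08-01 11:30", False),  # build goes red here...
    ("2014-08-01 13:45", False),
    ("2014-08-01 15:10", True),   # ...and is fixed ~3.7 hours later
    ("2014-08-02 10:00", False),
    ("2014-08-02 10:40", True),
]

fmt = "%Y-%m-%d %H:%M"
red_since, fix_hours = None, []
for stamp, passed in builds:
    t = datetime.strptime(stamp, fmt)
    if not passed and red_since is None:
        red_since = t                      # the build just went red
    elif passed and red_since is not None:
        fix_hours.append((t - red_since).total_seconds() / 3600)
        red_since = None                   # green again

print("mean time to fix the build: %.1f hours" % mean(fix_hours))
```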

---

Another topic is the challenge of what happens when a company has devalued or lost the exploratory tester skill set by focusing on "technical testers". A debate came up over what "technical tester" actually means, and in general, it was agreed that a technical tester is a programmer who writes or maintains automated testing suites, and who meets the same level/bar that the software engineers meet. The question is, what is being lost by having this be the primary focus? Is it possible that we are missing a wonderful opportunity to work with individuals who are not necessarily technical, or for whom that is not their primary focus? I consider myself a somewhat "technical tester", but I much prefer working in an environment where I can do both technical and exploratory testing. A comment was raised that perhaps "technical tester" is limiting; "technically aware" might be a better term, in that the need for technical skills is rising everywhere, not just in the testing space.

---

The last topic we covered was "testing kata", a topic of great interest to me because of ideas that we who are instructors in Miagi-do have been considering implementing. My personal desire is to see the development of a variety of kata that we can put together and use in a larger sphere. In martial arts (specifically, Aikido), there is the concept of "randori", an open combat scenario where the participant faces multiple challengers and needs to use the kata skills they have learned. For the kata part, we have a lot of examples. The randori part is an open area that is ripe for discussion. The question is, how do we put it into practice? I'd *love* to hear other people's thoughts on that :).

---

After breakfast, we got things started with Keith Klain welcoming everyone to the event, and Rich Robinson explaining the facilitation process. As many know (and I guess a few don't ;) ), CAST uses a facilitation method that optimizes the way the audience can participate. The K-cards we give out let people indicate how and when they want to ask questions, and make sure everyone who wants a say can get their say.

James Bach is the first keynote, and he has opened with the observation that talking about testing is very challenging. It's hard enough to talk about testing with other testers, but talking to people who are not testers? Fuhgeddaboudit! Well, no, not really, but it sure feels that way. Talking about testing is a challenging endeavor, and very often, there is a delayed reaction (Analytical Lag Time) where the testing you do one day comes together and gives you insights an hour or a day later. These are maddening experiences, but they are very powerful if we are aware of them and know how to harness them. The title of James' talk is "Testing is Not Test Cases (Toward a Performance Culture)". James started by looking at a product that would allow a writer working on a novel to change and modify "scene" order. The avenues James was showing looked like classic exploratory techniques, but there is a natural ebb and flow to testing. The role of test cases is almost negligible. The thinking process is constantly evolving. In many ways, the process of testing is writing the test cases while you are testing. Most of the details of the test cases are of minor importance. The actual process of thinking about and pushing the application is hugely complex. An interesting point James makes is that there is no test for "it works". All I know for sure is that it has not failed at this point in time.

The testing we perform can be informed by some basic ideas, some quick suggestions, and then by following the threads that those suggestions give to us. Every act of testing (genuine testing) involves several layers: there's a narrative or explanation, there's an enactment of test ideas, there's the knowledge of the product that we have, there's the tester's role and integration in the team, there's the skill the tester brings to the table, and there's the tester's demeanor and temperament. All of these aspects come into play and help inform us as to what we can do when we test.

The act of testing is a performance. It can't truly be automated, or put into a sequence of steps that anyone can do. That's like expecting we could write a sequence of steps so that anyone can step in and be Paul Stanley of KISS. We can all sing the lyrics or play the chords if we know them, but the whole package, the whole performance, cannot be duplicated; not that there aren't many tribute band performers who really try ;).

James shared the variety of processes he uses to test. He shared the idea of a Lévy flight, where we sample and cover a space very meticulously, then jump up and "fly" to some other location and do another meticulous search. The Lévy flight heuristic is meant to represent the way that birds and insects scour areas, then fly off in what looks like a random manner, and then meticulously search again for food, water, etc. From a distance, it seems random, but if we observe closely, we see that even the random flying around is not random at all; instead, it's a systematic method of exploration. Other areas James talked about are modeling from observations, factoring based on the product, experiment design, and using tools that can support the test heuristic.
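
For the curious, a Lévy flight is easy to simulate: step lengths are drawn from a heavy-tailed power-law distribution, so most moves are small and local, with an occasional enormous jump. A toy one-dimensional sketch (the parameters are arbitrary, purely to show the "search here, then fly elsewhere" pattern):

```python
# Toy 1-D Levy flight: power-law step lengths mean mostly small local
# moves, punctuated by rare huge jumps -- loosely, "search meticulously
# here, then fly off somewhere else and search again."
import random

random.seed(42)
alpha = 1.5      # tail exponent; smaller alpha means wilder jumps
position = 0.0

for step in range(10):
    u = 1.0 - random.random()       # u in (0, 1], avoids division by zero
    length = u ** (-1.0 / alpha)    # Pareto-distributed step length >= 1
    direction = random.choice((-1, 1))
    position += direction * length
    print("step %2d: moved %8.2f, now at %9.2f"
          % (step, direction * length, position))
```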

James created a "spec" based on his observations, but recognizes that his observations could be wrong, so he will share these findings with a programmer to make sure that his observations match the intended reality. There is a certain amount of conjecture here. Because of that, precise speech is important. If we are vague, we give leeway for others to say "yes, those are correct assumptions for the project". The more specific we are, the less likely that wiggle room will be there. Issues will be highlighted, and easier to confirm as issues, if we are consistent with the terms we use. The test cases are not all that interesting or important. The tester's spec, and the questions we develop and present at the end, are what matter. However, just as Paul Stanley singing "Got to Choose" at Cobo Hall in Michigan in 1975 does not sound exactly the same as the performance of the same song in Los Angeles in 1977, a testing round a week later may produce a totally different document, with similarities, but perhaps fundamental differences, too.

Does this all mean that test cases are completely irrelevant and useless? No, but we need to put them in the place in the hierarchy where they actually belong. There is a level of focus, and there are ways we want to interact with the system. Having a list of areas to look at, so as to not forget where we want to go, certainly helps. Walking through a city is immensely easier if we have a map, but the map is not entirely essential. We can intuit from street names and address numbers, we can walk and explore, we can ask for directions from people we meet, etc. Will the map help us get directly where we want to go? Maybe. Will following the map show us everything we want to see? Perhaps not. Having a willingness to go in various directions because we've seen something interesting will tell us much more than following the map to the letter. So it is with test cases. They are a map. They are a suggested way to go. They are not *THE* way to go.

Ultimately, it comes down to the fact that testing is a performance. Fretting about test cases gets in the way of the performance. Back to watching KISS (or The Cure, or The Weeknd, if we want to be more inclusive): they have songs, they have lyrics, they have emotive passages, but at the end of the day, an instance of a performance can be encoded, yet it represents only one instant in time. Every performance is different; every performance is on a continuum. You can capture a single performance, but not all of the performances that could be given. Cases can guide us, but if we want to perform optimally, we have to get beyond the test cases. We can capture the actual notes and words. There is no automation for "showmanship" ;).

This comes down to tacit and explicit knowledge. I remember talking about this with James when we were at Øredev in Sweden, where we discussed how to teach something when we can't express it in words. There is a level of explicit knowledge that we can talk about and share, but there's a lot of stuff buried underneath that we can't explain as easily (the tacit knowledge). Transferring that tacit knowledge comes down to experience and shared challenges. Most important, it comes down to actively thinking about what you are looking at, and doing what you can to give a performance that is both memorable and stands up to scrutiny.

---

As in all of these sessions, there are so many places to go and talks to see that it is difficult to decide what to attend. For that reason, I am deliberately going to let the WebCAST stream speak for itself. If you can view the WebCAST presentations, please do so; once the recordings are available, they will be posted to the AST channel for later viewing. I am going to focus on sessions that will not be recorded, as well as ones relevant to my own interests and aspirations. With that, I was happy to join Alessandra Moreira (@testchick) and her talk "My Manager Would Never Go For That", or more succinctly, how to apply context-driven principles to the art and act of persuasion. I always think of the scene in the film "Amadeus" where Mozart is trying to convince the Emperor to let him go forward with the staging and presentation of "The Marriage of Figaro". The Emperor at one point says, "You are passionate, Mozart, but you do not persuade". This is a key reminder to me, and I'm guessing Ale is very familiar with it. Being passionate is not enough; we have to persuade others to see the value in what we are passionate about.


Sometimes this comes down to a decision of "should I stay or should I go?". Do I change my organization, or do I change my organization (that is, fix the one I'm in, or find a new one)? Persuasion may or may not come about, but the odds are we can do a better job of persuading if we are ourselves willing to be persuaded. Conversations are two-way streets. We learn new things all the time. Are we willing to adapt our position given new information? If not, why not? If we think that our way is the best way, and we are not willing to bend, why should anyone else be persuaded by us? Influence is not coercion, and it's not manipulation; it's a process of guiding and suggesting, offering information and examples, and "walking the walk" we want to inspire in others. There is a three-step process in the art of persuasion. First, we need to discover something that we feel is important. Second, we need to prepare, to get our ducks in a row, so to speak; we need to know, and have supporting evidence, that we understand what we are doing and that we have a compelling case. From there, we need to communicate, and embark on an honest and frank dialog about the outcome we want to see.

In my own experience, I have found that persuasion is much easier if you have already done something on your own and experienced success with it. Sometimes we need to explore options on our own and see if they are viable. Perhaps we can find one person on our team who can offer a willing ear. I have a few developers on my team who are often willing to help me experiment with new approaches, as long as I am prepared to explain what I want to do and have done my homework up front. People are willing to give you latitude and the benefit of the doubt if you come prepared. If you can present what you do in a way that convinces the person who needs to be persuaded that you have worked to be ready for them to do their part, they are much more likely to go along with it. Start small, and get some little successes. That will often get the first few "adopters" on board with you, and then you can move on to others. Over time, you will have proven the worth of your idea (or had it disproved), and you can move forward, or you can refine and regroup to try again.

I'm fortunate in that I have a Manager who is very willing to let me try just about anything if it will help us get to better testing and higher skill, but it's likewise important to do my homework first, as it helps to build my credibility on the topic at hand. Credibility goes a long way to helping persuade others, and credibility takes time to build. With credibility comes believability, and with believability comes a willingness to let you try an idea or experiment. If the experiments are successful in their eyes, they will be more likely to let you do more in those areas you are aiming to persuade. If the experiments fail, do not despair, but it may mean you have to adapt your approach and make sure you understand what you need to do and how it fits in your own organization.

One of the key areas that people fail on when it comes to persuasion is "compromise". Compromise has become a bad word to many. It's not that you are losing, it's that you are willing to work with another person to validate what they are thinking and to see what you are thinking. It also helps to start small, pick one area, or a particular time box, and work out from there.

---

During the lunch break, Trish Khoo stepped on stage to talk about "Scaling Up with Embedded Testing". Trish described a lot of the testing efforts she had been involved in, where the code she had written was disregarded by the programmers, since they felt what she was doing was just some other thing that had to be done so they could do what they needed to do. Fast forward to her working in London, where the programmers were talking about how they would test the code they were writing. This was a revelation to her, because up to that point, she had never seen a programmer do any type of testing. Since many of the efforts she was used to doing were now being taken care of by the programmers, her role became more challenging, and she had to be a lot more inquisitive and aggressive in looking for new areas to explore.

We often think of the developer and the tester as being responsible for finding and fixing bugs, and the product owner and the tester as responsible for verifying expectations. The bigger challenge is that these loops chew up hours, days, and weeks as we constantly cycle through finding and fixing bugs and verifying expectations. Interestingly, when we ask developers to focus on writing tests to help them write better code, the common answer is "yeah, that makes sense, but we don't have time to do that now, we'll do it next sprint", and then they say the same thing the next time, if it comes up again. How do we convince an organization to consider a different approach, one where developers get more involved in testing? Trish looked at a number of different organizations to see how they did it.

One of the people Trish talked to was Elisabeth Hendrickson of Cloud Foundry/Pivotal. Interestingly, Cloud Foundry does not have a QA department. That's not to say that they do not have testers, but they have programmers who test, and testers who program. There is no wall. Everyone is a programmer, and everyone is a tester. Elisabeth has a tester on the team by the name of Dave Liebreich (Hi, Dave ;) ). While he is a tester, he does as much testing as the programmers, and as much code writing as the programmers.

Another person she talked to was Alan Page of Microsoft (Hi, Alan ;) ). Some of the teams at Microsoft have moved to a model where everyone has dispensed with job titles. Ask Alan what his job title is, and he'll say "generic employee" or, if pushed, "software engineer". The idea is that they are not confined to a specific specialty. The goal is that, instead of putting people in roles, they open up opportunities for people to do what their skill set and passion provide. The net result is that managers orchestrate projects based on skill. Instead of hiring "testers who code", they are looking to hire "people who can solve problems with code". The idea of "tester" as a role is not relevant; everyone codes, everyone tests.

The third case study was with Michael Bachman at Google. In a previous incarnation, Google would outsource a lot of manual testing to vendors, mostly to look at the front-end UI. The coverage those testers were addressing ignored about 90% of the code in play. For Google to stay competitive, they opted to change their organization so that Engineering owned quality as a whole. There was no QA department. Programmers would test, and there was another team, called Engineering Productivity, that helped to teach about areas of testing, as well as investing in Software Engineers in Test (SETs), who could then help instruct the other programmers in methods related to software testing. The idea at Google was that "Quality is Team Owned, not Test or QA Owned".

What did they all have in common? Efficiency was the main driver; teams that have gone to this model have done so for efficiency reasons. There are lots of other words associated with this (Education, Feedback, Upskill, Culture, No Safety Net, etc.). One word that is missing, and that I'd be curious to see, is effectiveness. Based on the presentation, I would say that effectiveness was also part of this process; efficiency without effectiveness will ultimately cause an organization to crash and burn, so there is real value in these changes.

So what does that mean for me as a tester? It means the bar has been raised. We have some new challenges, and we should not be afraid to embrace them. Does that mean that exploratory testing is no longer relevant? Of course not; we still explore when we develop tool assisted testing. We do end up adding some additional skills, and we might be encouraged to write application code, too. Not having a "developer" background doesn't automatically put me at a disadvantage, but it does mean I would be well served to learn a bit about programming and get involved in that capacity. It may start small, but we can all do some of it if we give it a chance. We may never get good enough to become full time programmers, but this model doesn't really require it. Also, that's three companies out of tens of thousands. It may become a reality for more companies, but rather than be on the tail end of the experience and have it happen to you, perhaps it may help to get in front of the wave and be part of the transition :).

---

The next session I opted to attend was "The Psychology and Engineering of Testing", presented by Jan Eumann and Ilari Aegerter. Both Jan and Ilari work with eBay, but they are part of the European team, and the European market and engineering realities are different from what goes on in Silicon Valley. There is a group of testers based in London and Berlin that gets software from the U.S. to test, while the European team has software testers embedded in the development teams.

Who works as an embedded tester on an Agile team? Overall, they look for individuals with strong engineering skills, but they also want to see the passion, interest and curiosity that help make an embedded tester formidable. The important distinction is that the embedded testers at eBay Europe do not think of programming and testing as either/or, but as "as well as". They are encouraged to develop a broad set of skills so they can solve real problems alongside the best people who can solve those problems, rather than focusing on just one area.

When Jan and Ben Kelly were embedded within the European teams, there was an initial experience of Testers vs. Programmers, but over time, developers became test infected, and testers became programming savvy along with it. This prompted other teams to say "hey, we want a tester, too". In this environment, testers and programmers both win.

Though there are integrated teams, the testers still report to testing managers, so while there are still traditional reporting structures, there is a strong interconnected sense between the programmers and testers in their current culture. The Product Test Engineering team has their own Agile manifesto that helps define the integration and importance of the role of test, and how it's a role that is shared through the whole team. If the goal of an embedded tester is to be part of a team, then it makes sense, in Jan's view, to be with the team in space, attitude and purpose. Sitting with the programmers, hanging with the programmers, meeting with the programmers, all of these help to make sure that the tester is involved right from the start.

Additionally, testers can help teach programmers some testing discipline and an understanding of testing principles. Testers bring technical awareness of other domains. They also have the ability to guide testing efforts in early stage development, and to inform and encourage areas to be set up that programmers might not consider were there not a tester involved. It sounds like an exciting place to be a part of, and an interesting model to aspire to.

---

I love the cross pollination that occurs between the social sciences and software testing, and Huib Schoots gave a talk that addresses exactly that.

We often treat software testing and computer science as though they were hard sciences like mathematics or physics or chemistry. They have similar principles and components, but in many ways, the systems that make software are more akin to the social sciences. We think that computers will do the same thing every single time in exactly the same way. In fact, timing, variance, user interactions, congestion and other details all get in the way of the specific factors that would make "experiments" in the computer science domain truly repeatable. I've seen this happen in continuous integration environments, where a series of tests that failed at one time passed the second time they were run, without changing any parameters. Something caused the first run to fail and the second to pass. There can be lots of reasons, but usually they are not physics or mechanical details; they are coding and architectural errors. In other words, people making mistakes. Thus, social rather than hard sciences.
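As a hypothetical illustration (my own sketch, not an example from Huib's talk), here is the kind of timing-dependent test that produces exactly this behavior in CI: nothing about the code or the parameters changes between runs, yet the outcome can differ, because the assertion races against a background thread.

```python
# A minimal sketch of a "flaky" test: the outcome depends on thread
# scheduling, not on the code under test. All names are illustrative.
import threading
import time

results = []

def slow_worker():
    # Simulates I/O or network latency that varies from run to run.
    time.sleep(0.05)
    results.append("done")

def test_worker_finishes():
    worker = threading.Thread(target=slow_worker)
    worker.start()
    time.sleep(0.05)  # Hoping the worker finishes first: a race, not a guarantee.
    assert results == ["done"]  # May pass on one run and fail on the next.
    # The deterministic fix is a design change, not physics:
    # replace the sleep above with worker.join().

if __name__ == "__main__":
    test_worker_finishes()
    print("Passed this time... rerun it enough and it may not.")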

Huib shifted over to the ideas in the book "Thinking Fast and Slow", in which simple things are calculated or evaluated very quickly, while more complicated matters require a different kind of thinking. Karl Marx developed theories about how people should interact, and while the theories he prescribed have been shown to be far from ideal, they are still based on the realities of how humans interact with one another. The science of Sociology informs many aspects of the way we work and interact with others, which in turn informs our designs of systems. Lévi-Strauss represents Anthropology, which deals with the way different cultures are structured and the environmental factors that help shape those structures. Maria Montessori represents Didactics and Pedagogy, aka learning and the methodology that helps inform how we learn. He used his girlfriend to represent Communication studies, and the fact that the way we talk to one another informs the way we design systems, because communication is often what gets in the way of what an application does (or should I say, the inability to communicate smoothly gets in the way).

Science and Research are areas that inform a great deal of what a software tester actually does. Sadly, very few software testers are really familiar with the scientific method, and without that understanding, many of the options that can help inform test design are missing. I realized this myself several years ago when I stopped considering a long list of test cases to be effective testing in itself. Going back to the scientific method gave me the ability to reframe testing as a scientific discipline in its own right. However, we do ourselves a tremendous disservice if we only use hard science metaphors and ignore the social sciences and what they tell us about how we communicate and interact.

We focus so much attention on trying to prove we are right. That's misguided; we cannot prove we are right about anything. We can disprove, but when we say we've "proven" something, we really mean we have not yet found anything that disproves what we have seen. Over time and repeated observation, we can come close to saying it is "right", but only until we get information that disproves it. The theory of gravity seems to be pretty consistent, but hey, I'll keep an open mind ;).

Humans are not rational creatures. We have serious flaws. We have biases we filter everything through. We are emotional creatures. We often do things for completely irrational reasons. We have gut feelings that we trust, even if they fly in the face of what would be considered rational. Sometimes they are right, and sometimes they are wrong, yet we still heed them. Testing needs to work in the human realm. We have to focus on the sticky and bumpy realities of real life, and our testing efforts likewise have to exist in that space.

---

Martin Hynie and Christin Wiedemann focused on a talk that was all about games. Well, to be more specific, "Why testers love playing – Exploring the science behind games". Games are fundamental to the way we interact with our environment and with others. Games help us develop cognitive abilities; the more we play, the more cognitive development occurs. This reminds me a lot of the work and talks I have seen from Jane McGonigal on how gaming and game culture affect both our thinking and our psychological being.

OK, that's all cool, but what does this have to do with testing?

Testing is greatly informed by the way we interact with systems. We try out ideas and see if they will work based on what we think a system might do. While the specific skills learned from games may not transfer, the ways of looking at situations, and the inspiration that gives us new ideas, do. Game play has been shown to modify and change our cortical networks. It was fascinating to see the way Martin and the other testers on the team approached this as a testing challenge.

The test subjects had their brains scanned while they were playing games, and the results showed that gaming had a measurable impact on the brain. Those who played games frequently showed cooler areas of the brain than those who did not play, which suggests that gaming optimizes neural networks in many cases. Martin also took this process further with a more specific game, Mastermind, and looked at what that game did to his brain.

So are games good for testers? The jury is out, but the small sample set certainly seems to indicate that yes, games do indeed help testers, and that the culture of tester games, and other games, is indeed healthy. Hmmm, I wonder what my brain looks like on Silent Hill... wait, forget I said that, maybe I really don't want to know ;).

---

A great first day, so much fun, so much learning, and now it's time to schmooze and have a good time with the participants. See you all tomorrow!

Saturday, August 9, 2014

From Mid-Town Manhattan, it's #TestRetreatNYC


In the offices of LiquidNet, a group of intrepid, ambitious testers met to discuss what aspects of testing mattered to us. Test Retreat is an Open Space conference, where sessions begin (and end) when the participants want them to, the people who are there are the people who need to be there, and the law of two feet rules. We took some time to propose topics, pull together similar topics and threads, and then headed to our places to talk about the stuff that is burning us up inside.

---

The first session I attended was based around developing a testing community within a city or region. As many of us were part of different communities around the world, there were a variety of experiences, ranging from very small markets with perhaps a hundred testers total, to large regions with hundreds of thousands of testers, but little in the way of community engagement. 

Rich Robinson led the discussion and shared his own experiences with growing and scaling the community in Sydney, Australia. Rich shared that there were three areas that were consistently asked for and looked at as the goals of the attendees: looking for work, finding out the latest trends, and honing and sharpening skills. Rather than try to be all things to all people, they developed a committee and separate initiatives, with committee members focusing on the initiatives that mattered to them. For the group that wanted to get work, they made an initiative called “opportunity seekers”, and focused energy on those who had that as their biggest objective. For those who want to focus on latest trends, they have an avenue for that as well. 

Different regions have unique challenges. In the Bay Area, we have an overabundance of meetups for technical topics, a handful of which relate to software testing. As founders of Bay Area Software Testers (BAST), Curtis Stuehrenberger, Josh Meier and I have chosen to focus on areas that are not as often discussed (we tend to steer clear of automation tools and techniques, since there are dozens of other meetup groups that cover those topics). Another challenge is frequency. In some cases, frequent meetings are key; in others, meeting less often works, but what matters is that the group meets regularly (monthly, every six weeks, or quarterly seem to be the most common models).

Rich also shared that their initiatives branch away from the formal meetup sessions, with other activities taking place outside the formal meetup times. By having each initiative staffed with committed people and the resources to drive it, buzz gets generated and more people get involved. One of the key things Rich emphasized was getting people involved and engaged in these initiatives. The Sydney tester group has a committee of ten members that helps make sure these initiatives are staffed and supported.

Another challenge is geography. Some cities sprawl; others are difficult to get to in a timely manner due to traffic and population density. For example, the San Francisco Bay Area has four general regions: San Francisco (and the upper San Francisco Peninsula), Silicon Valley (and the lower San Francisco Peninsula), the East Bay, and the North Bay. With a few exceptions, people who participate in one generally do not regularly participate in the others. Reaching a broader community in these sub-regions may require technology and remote-access options for people to participate.

Ultimately, growing a community takes time, dedicated people, and a range of topics that matter to the attendees (including making sure that food and drink are there ;) ). To get the people you want involved, it helps to be very specific about what is needed. Saying “I need help with this” is less effective than saying “I need this specific thing to be done at this time for this purpose”. Specificity helps a lot when recruiting helpers.


The next session was “Sleep No More”, presented by Claire Moss, and focused on the model of the performance/play called Sleep No More (which is, in some ways, described as “an immersive performance of Macbeth”). It’s a darkened environment, all participants wear masks, no photography, no talking, just experience as the person sees it. Exploratory Testing, in many ways, has similarities to this particular experience. Claire used a number of cards to help display the ideas, and one of the first ideas she shared was “fortune favors the bold”. Curiosity and a willingness to go in without fear and deal with a substantial amount of “vague” is a huge plus. If you already have that, you have a strong advantage. If this is not natural for you, it can be developed.

Each room in the Sleep No More experience was part of the performance, and at any time, rooms could be empty or have people filter in during the performance. There are “minders” in the event that help to make sure that people don’t completely lose track of where they are. At times, there are very personal experiences that take place based on your tracks and where you go. Claire described a very intense experience of the performance based on where she went and what she observed and chose to follow up on. She also said that, up to this point, no one that she knew had anything like the experience she had.

The experience of Sleep No More was bizarre, creepy, full of strange triggers, and had the potential to go in wildly unexpected directions. Software testing in many ways mirrors this experience. While there may be familiar areas and ideas, very often our choices and angles take us into very unexpected places. To give a sense of the scope of the space, this was a six story building in what used to be a hotel (whole areas of the building were gutted, and in some spaces multiple floors were open to the air and visible).

Claire described a feeling of “amazement fatigue”, where the level of stimulus is so high that there is no way to take it all in. The participants have to make conscious choices as to where they will go, and many of the participants will have wildly different experiences. Sometimes, they would follow a character, only to watch them go through a door and close and lock it, so that they couldn’t be followed any longer. This reminds me of following threads of a feature, and being brought to a dead end. People will observe different things, and they will also observe what other people do, and what they focus on. This can give us clues as to areas we want to explore next. 

This experience sounds amazing, and I am definitely interested in going and doing it myself, if time and commitments permit me to do so. Looks like I will be attending the August 10 performance :).


The next session was based on “Leadership”, and Natalie Bennett led the session with the idea that she wanted to see where individuals felt their experiences or needs for leadership were, as opposed to her telling us what she felt about Leadership and how to do it. Questions that Natalie wanted to discuss were:

- What is the purpose of a test team lead?
- What is it for?
- What makes it different than being a test manager?

The discussion shifted from there into ways that test team leads and test managers are similar and ways they differ. Some of the participants talked about how they led by example, and how they divvied up the work among the group based on the people involved and what they were expected to do. Team leads generally do not have hiring/firing authority, and they typically do not write reviews or have input on salary decisions. In other environments, the team manager and team lead are one and the same. Some are cynical about the effectiveness of that arrangement, while others feel it is possible to be both a team lead and a team manager. One attendee who is a Director of Q.A. for her company said that she was “the face of Q.A.” to the organization, and as such, she was setting the direction and expectations for the organization, as well as for her own direct reports.

Team leads are expected to teach and coach the members of their group, as well as be the point of contact for the group. It’s seen as important that they be able to develop their own role and make it responsive to their own environment. The team lead stands up for the group, and defends it from the encroachment of issues and initiatives that are counter-productive to its success. Responsibility and authority tend to be on a sliding scale; different companies allow different levels of authority for their leads. Some grant a level of authority just short of being an actual manager; in others, the lead is considered a “first contact” among equals.

One of the bigger challenges is to deal effectively when team members are failing. Failing in and of itself is not bad. It’s important to learn, and failing is how you learn, but when the failing is chronic or insurmountable, there needs to be a different level of interaction. Lean Coffee, direct mentoring, or even a serious re-consideration of experiences and goals can be hugely beneficial, both for the individual and the team as a whole.


Matt Heusser led a session about “Teaching Testing”, and some of the challenges we face when we teach software testing to others. When we have an engaged and focused person, this usually isn’t a problem. When the person in question isn’t engaged, or is just going through the motions, it’s a little more difficult. The question we focused on at first was “what methods of teaching have worked for you?” Testing is a tactile experience, rather than an abstract question. We are familiar with questions like “how do you test a stapler?” or “how can you test a Rubik’s Cube?”. The presentation of this challenge may be the most important aspect. Some might see “how do you test a stapler?” as demeaning: they are professionals, what is this going to teach them?

In my experience, one of the things I found to be helpful is to actually spell out how challenging the exercise could be. Rather than ask “How do you test a stapler?”, I might instead say “Tell me the 120 ways that you can test a stapler to confirm it’s fit for use?” This sets a very different expectation. Instead of saying “oh, this is trivial”, by seeding a high number, they may want to try to see how they might be able to meet or exceed that number. They become engaged.

To borrow a bit from Seth Godin, there are two primary goals for everyone, important regardless of the discipline. The first is to focus on authentic problems. The second is to be able to lead. Domain knowledge is a huge factor in helping to identify authentic problems. It’s not the only means, but getting to really know the domain can help inform testing ability. Another important aspect is to understand how people learn. Everyone goes about learning a bit differently, and helping each person learn how they learn can be a huge step in teaching them. Sometimes the ripest area for learning is to wade into an area where people disagree, or where there is dysfunction between people or groups: team members who don’t talk to each other, or simmering hostility between people. If there’s hostility between two programmers, and they write software that interacts, it’s a good bet there’s a goldmine of issues at their interaction points (I think this is a very interesting idea, btw :) ).

Key to teaching testing is the ability to reflect and confirm what has been taught and learned, and for me, I think that Weekend Testing does this very well. The benefit of Weekend Testing, beyond just doing the exercise, is that we can see lightbulbs turning on, and there’s a record of it that others can see and learn from. Creating HowTo’s can also be a helpful mechanism for this. 


This section is the talk that Smita Mishra and I gave about “Hiring Testers and Keeping them Engaged Once We’ve Hired Them”. I recorded this session, and I will transcribe it later ;).


Claire Moss led a session on “Communicating to Management” and we went through and considered a list of questions that are important to frame the conversation(s):

- What does quality look like to our organization?
- Why spend money on testing?
- What does testing do?
- What value are we getting out of testing?
- “I read this about QA, and it says we should do this… why aren’t we doing this?”

These are all questions that we need to be prepared to answer. The question is, how do we do that?

There are several methods we can use, but first and foremost, we need to determine what we need to speak with management about, and if possible, use the opportunities to help educate them about what it is we can do, and at the same time, get a clear understanding about what their view of the world is.

Looking to helpful standards and practices can give us guidance, but they don’t always represent our reality. Information needs to be specific to where we stand at a given time. Testing is primarily focused on giving quality information to the executives so that they can make qualified decisions. That is first and foremost our mission. Information that we can effectively provide includes: 

- Framing of the ecosystem on a global scale (browser standards, trends, data usage histories)
- Impact on customers (client feedback, analytics data)
- Clarify issues and questions (heading off the executive freakout)
- Managing expectations (especially when dealing with something new)
- Explaining how likely it is that issues brought to their attention are really problems worth investing in
- Explaining risk factors and methods to mitigate those risks

---

At the end of the day we had a lot of new ideas, feedback for some new initiatives, an emphasis on better communication, more focused due diligence, and the sense that many participants had a lot they felt they could contribute. This was a fun and active day, with a lot of learning and connecting. One of the things that always impresses me about these events is that we really have a lot of solid people in the testing community, but we need even more.

I encourage every tester who admires craftsmanship, skill, and thinking to make it a point to come to these now annual events (this is the third of them, so I think it’s safe to say it’s a thing now ;) ). Once again, thanks Matt (Heusser) and Matt (Barcomb) for organizing what has become my favorite Open Space event. May there be many more.