Wednesday, August 20, 2014

A TESTHEAD Wayback Machine Find: ALM Forum Talk From April 2014

I am grateful for a variety of friends and acquaintances in the testing world who keep me alert to things they discover. What makes it even more fun is when I'm alerted to things I did and forgot about, or someone discovers something I didn't know was there. Today is one of those days.

Back in April, I gave a talk about "The New Testers: Critical Skills and Capabilities to Deliver Quality at Speed". I posted the slides on my LinkedIn profile and my SlideShare account, and then went on with my reality. It was pointed out that a video of my talk was recorded, and now that I know where it is, I can share it here :).

http://vimeopro.com/user27088109/almforum14test/video/95238828


I give a shout out to SummerQAmp, PerScholas, Weekend Testing, and Miagi-do as exemplars of what we can do to help empower future testers with real skills. There are many others, to be sure, but these are the ones I'm actively engaged in, so hey, I'm biased :).

I hope you enjoy the talk, and if you do, please share the message with others.

Friday, August 15, 2014

Coyote Teaching: Watch How It All Came Together

Harrison Lovell and I decided to try an experiment.

What if a mentoring pair (a person relatively new to the software testing world and a longtime practitioner) were to work together and look at the way that mentoring is performed?

Could we learn something in the process?

What if we tried something novel, and looked at mentoring relationships all over the world, both current and ancient?

What would we find, and could we learn from them in a way that might prove to be useful to us today?


With that, we embarked on a several-month voyage (conducted mostly over Skype and email) and decided we'd try a method that takes its cues from ancient cultures. That method is called "Coyote Teaching", and we opted to be the Coyotes :).

During CAST 2014, which was held this past week at the Helen and Martin Kimmel Center in New York City, we had the chance to present this topic and approach, and Huib Schoots, a friend of ours, was kind enough to record the whole talk. For those who would like to see it, it is here in its entirety:



I want to congratulate Harrison on his first conference talk, and to thank him for his enthusiasm, as well as his hospitality while showing me around mid-town Manhattan during the week (I should also mention that this was the first opportunity we have had to meet face to face).


I am also thankful to those who gave us valuable feedback to make the talk even better than we originally envisioned. My thanks especially to Alessandra Moreira for helping me go over the fine points of the talk and acting as the counter debater to help poke holes in the ideas we were going to present.

With that, please watch "Coyote Teaching: a New (Old?) Take on the Art of Mentorship". If you like what you see, please comment below and let us know what you liked. If you don't like what you see, please comment below and tell us that, too :). Either way, we'd love to hear what you think.

Wednesday, August 13, 2014

The Conferring Continues at #CAST2014


Hello everyone! It's wild to think that my week in New York will be ending tomorrow morning. I've had so many great experiences, conversations and interactions with so many wonderful people. I've met both of my PerScholas mentees, and I've enjoyed watching them take in this experience. It's also been great fun talking to so many new friends, and I will genuinely miss this "gathering of the tribe", but let's not lament leaving when there is a whole day of interaction and conferring (not to mention my own talk ;) ) still to happen.

Lean Coffee this morning dealt with an interesting challenge: the water main broke in front of the Marlton, and as such, the water for most of 8th Street was shut off. Water was only restored a little after 7:00 a.m., so the coffee part of Lean Coffee is only just starting to come in for the participants.

One of the topics we started with was the transient nature of software testing, and why we see so few people who come into testing stay with testing. There are lots of reasons for this. In my career, many of the software testers I have met have gone on to do other things. Some became programmers, some became project managers, some became system administrators, and some became managers. Of the people I knew who were testers for more than five years, most of them remained testers going forward. That's just my impression, but I think that, by the time people have reached the five-year mark as testers, they have decided that they either like testing or are good at it. Of course, this may be an observation bias, since I am seeing people who followed my own path. It's an interesting topic to look at further.

The second topic focused on how software testers train and learn about their jobs, and how they find the time to do it. Many of the participants make an effort to carve out time for material that is relevant to them. For me personally, I like to use my time on the train when I commute to and from work to read, think, and ponder ideas. Others use materials like Coursera or uTest University. Some people enjoy writing blogs (like me ;) ). Key takeaway: everyone has a different way to learn.

The third topic focused on how people get through the large amount of material at a conference. How do they capture it all? There are lots of techniques. My favorite is writing blog posts in semi-real time (like this one). Instead of merely transcribing what I am hearing, I try to write a summary plus a personal take on the details I have learned, so that it is more personal and actionable. Others use sketch notes, doodles, mind maps, recordings of talks, etc. The key takeaway is not the method in which you capture, but that you make what you capture actionable.

The fourth topic was moving into consulting or contracting, and what it takes to make that work. Several of the participants shared their experiences of how they made the transition, and the challenges they faced. In addition to doing the work, they had to deal with finding gigs, collecting money, doing paperwork, filing taxes, etc. At times there will be unusual fits, and needs that may or may not be an ideal match, but a benefit of being a consultant is that you are there for a specific purpose and for a specific time, and at the end of it, you can leave. There's a need to be able to deal with a high level of ambiguity, and that ambiguity tends to be a big hurdle for many. On the other side, there is a mindset of learning and continuous pivoting, where there's always something new to learn. The pay can be good, but it can be sporadic. Ultimately, at the end of the day, we are all consultants, even if we are employed by a company and getting a paycheck.

The final topic was "are numbers evil or maligned?". Some people look at numbers as a horrible waste of time: don't count test cases, don't count bugs, don't count story points. In some ways, this quantitative accounting is both aggravating and somewhat necessary. Numbers provide information. Whether that information is good or bad depends a lot on what we want to measure. If we are dealing with things that are consistent (network throughput, megabytes of download, etc.), collecting those numbers matters. Numbers of test cases, numbers of stories, etc. are much less helpful, because there is so much variation in what those values mean and how much we can control them. When the measurement is not actionable, or it's really vague, then the numbers don't really make a difference. Numbers can be informative, or they can be noise. It's up to our organizations (and us) to figure out what is relevant and why.

Thanks to all the participants for being involved at this early hour :).

---

The morning keynote for today was delivered by Carol Strohecker of the Rhode Island School of Design. The topic was "From STEM to STEAM: Advocacy to Curricula", and the focus was on the fact that, while we have been emphasizing STEM (Science, Technology, Engineering and Mathematics), we miss a lot of important details when we do not include "Art" in that process.

STEAM emphasizes the importance of art and design, and how they are important to innovation and economic growth. Tied into this is the maker movement, which likewise emphasizes not just the functional but the aesthetic. There are a lot of neat efforts taking place at the academic level, as well as the legislative level, to get these initiatives into schools so that we can emphasize this balance.

There are many tools that we can use to help us look at the world in a more artistic and aesthetic way. Art is nebulous and subjective. It does not have the same level of solid, concrete syntax that science or language has (and language is pushing it). Much of the variance in the artistic leads to the development of a particular skill or attribute that we call "intuition". It's not concrete, it's not focused, it's not based on hard data, but it informs in ways that are just as valuable. Artistic endeavors help to develop these traits. Cutting the arts out of our economic vision puts us at a significant disadvantage.

[I had to duck out at this point to take care of some AST business, so I can only give you my take on the actual details I heard discussed. Sorry for the gap.]

One of the quotes shared that makes great sense to me comes from Immanuel Kant:

"The intellect can intuit nothing, the senses can think nothing. Only through their union can knowledge arise."

It feels like this talk is resonating with many people, as the Open Season for this talk is vigorous and active. An emphasis on synthesizing the inputs and the areas that we interact with makes for a richer and greater whole. It takes a different level of thinking to make it happen, but there are amazing opportunities for those willing to stretch into new disciplines. Many books have been suggested, including "Inventing Kindergarten", which talks about the importance of play and discovery to learning and skill acquisition/development.

James Bach made an interesting comment. With the emphasis on STEM, and now STEAM, who gets left behind now? Are there other areas that are now orphaned and unfunded, or is it that these areas have been unfunded for so long that they cry out for help? Does it make sense to work with a small group, or do we need to consider that STEAM is meant to be an all-encompassing discipline focus?


---


Up next is Harrison's and my talk... for obvious reasons, I will not be live-blogging then ;). I do, however, encourage everyone to tweet comments or even ask questions, and I'll be happy to follow up and answer them :).


---

During lunch we had the results of the test challenge and we also had the results of the Board of Directors election.

Returning members of the board:
- Markus Gärtner
- Keith Klain (re-elected)
- Michael Larsen
- Pete Walen

New Members:
- Erik Davis
- Alessandra Moreira
- Justin Rohrman

Congratulations to everyone, I think 2014-2015 will be an awesome year :).

Ben Simo took the stage after lunch to talk about the messy rollout of the healthcare.gov web site and all of the problems that he, alone, was able to find with the site. Ben created a blog to record the issues he found, and it received a *lot* of attention from the media, and from the government as well.

For the details of each of the areas that he explored, you can see the examples he posted on http://blog.isthereaproblemhere.com/. What I found interesting was that, as Ben tested and logged his discoveries, it showed just how messed up so many of the areas were, and how Ben's efforts uncovered some strange issues without even trying. Ben was not asked to speak to Congress or to testify, and he did not see any government action come from his efforts, but he did become the target of DDoS attacks, and media outlets were calling him very regularly.

On the side of the "is there a problem here" site, Ben lists a variety of test heuristics, and he applied most of those heuristics to help uncover the bugs he found. Many of the issues discovered fit into specific heuristics. Here they are:

CONSISTENCY HEURISTICS
from James Bach and Michael Bolton

H istory
I mage
C omparable Products
C laims
U ser Expectations
P roduct
P urpose
S tatutes
-F amiliar

FAILURE HEURISTICS
from Ben Simo

F unctional
A ppropriate
I mpact
L og
U ser Interface
R ecovery
E motions

SECURITY VULNERABILITIES
the OWASP Top 10

1. Injection
2. Broken authentication and session management
3. Cross-Site Scripting
4. Insecure Direct Object References
5. Security Misconfiguration
6. Sensitive Data Exposure
7. Missing Function Level Access Control
8. Cross-Site Request Forgery
9. Using Components with Known Vulnerabilities
10. Unvalidated Redirects and Forwards
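
To make one of these concrete (my illustration, not Ben's): item 10, unvalidated redirects, can often be probed with nothing more than a crafted URL. Here is a minimal sketch, assuming Python 3 with the requests library; the target URL and the "next" parameter name are hypothetical stand-ins for whatever the site under test actually uses.

import requests

# Hypothetical target and redirect parameter; substitute the real ones.
TARGET = "https://example.com/login"
EVIL = "https://attacker.example"

# Ask the site to redirect somewhere it should never send users, and
# inspect the Location header rather than following the redirect.
resp = requests.get(TARGET, params={"next": EVIL}, allow_redirects=False)
location = resp.headers.get("Location", "")

if location.startswith(EVIL):
    print("Possible unvalidated redirect:", location)
else:
    print("Redirect appears constrained:", location or "(no redirect)")

If the application happily bounces the browser to an arbitrary external site, that's a phishing vector, and it's exactly the kind of issue a heuristic checklist like this prompts you to go look for.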

What was also amazing to see was that all of these issues were discovered and reported using nothing more than his own data. He said he would not try to do anything to access other people's information, and he did not. Even then, he still found plenty of issues that should have made the healthcare.gov team both very nervous and very grateful.

---

Geoff Loken tickled my interest with his talk titled "The history of reason; arts, science, and testing". As one who finds philosophy and all of its iterations over the millennia fascinating, I've found the different ways that reason and intuition developed over the ages to be a worthwhile area to study and learn about. Ultimately, philosophy comes down to epistemology, and epistemology led to the scientific method (observation/conjecture, hypothesis, prediction, testing, analysis).

Observation cannot tell us everything. There are things we just can't see, hear, touch, smell, or taste. At those times, we have to use reasoning skills to go farther. The scientific method does not prove that things exist, but it can, to a point, disprove claims about the nature of an item's existence.

Geoff played a clip from Monty Python and the Holy Grail (the witch scene), which showed what could be considered a case of bad test design. Ironically, based on the laws and understanding of the world at the time, they performed tests that were actually in line with their standards of rigor. We have simply codified our understanding of more disciplines since their time.

One of the other aspects that comes into play when we are testing is that science can quantify, but it can't qualify. There are aspects of testing that we need to look at that go beyond the very definable and specific data aspects.

Overall, I found this to be a lot of fun to discuss, and it reminded me of many of the historical dilemmas that we have faced over the centuries. We look at what appears to be totally irrational actions in centuries past. It makes me wonder what from our current time will look irrational 500 years from now ;).

---

The last session I attended today was Justin Rohrman's talk "Looking to Social Science for Help With Metrics". "Metrics" is considered a dirty word in many places, and that disparaging attitude is not entirely unjustified. Metrics are not entirely useless, but measuring them in the right context is important. Used in the wrong context, they are benign at best and downright counter-productive at worst.

By focusing on the metrics that actually matter, we can look at measurements that tell us about the systems we use and where we are in the life of the product. Some of the context-driven measurements that we could/should be looking at include:


  • work in progress
  • cycle time
  • lead time
  • touch time
  • slack
  • takt time
  • time slicing
  • variation
  • Find -> Fix -> Retest loop

These measurements intrigue me, and they seem to be much more in line with what could actually help an organization. They fit well into what is referred to as the Lean model. Lean focuses on measurement for improvement, in contrast with measurement for control.
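
As a small illustration of two of these, here is a minimal sketch, assuming Python 3, of computing lead time and cycle time from work-item timestamps. The items and field names are invented for the example; real numbers would come from your tracker.

from datetime import datetime

# Invented work items: when they were requested, started, and finished.
items = [
    {"requested": "2014-07-01", "started": "2014-07-03", "done": "2014-07-08"},
    {"requested": "2014-07-02", "started": "2014-07-04", "done": "2014-07-06"},
]

def days(start, end):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

lead_times = [days(i["requested"], i["done"]) for i in items]   # request -> delivery
cycle_times = [days(i["started"], i["done"]) for i in items]    # start -> delivery

print("mean lead time (days):", sum(lead_times) / len(lead_times))
print("mean cycle time (days):", sum(cycle_times) / len(cycle_times))

The appeal is that both numbers describe the flow of the work rather than grading the workers, which is what makes them candidates for measurement for improvement.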

I'm currently fortunate to work for a company that does not require me to chase down a number of useless metrics, though we have a few core metrics that we look at. These examples give me hope that we can get even more focused on measurement values that are meaningful. I'll definitely bring the context-driven list to my engineering team, or better yet, try to see if I can derive them and report them myself :).

---

Tim Coulter and Paul Holland wandered about the venue, checking out as many of the talks as they could, so as to share some "TimBITS" and takeaways. Some takeaways:

- Direct your testing keeping the business goals in mind
- Tighten up the feedback loop, it will make everyone happy
- Write your bug reports as if they were for a memory-wiped future you
- When you add a test specialist into a development team, everybody wins
- Skills atrophy: Testing skills must be used or you will lose them
- Social sciences all play a part in testing, it's not just technology
- Testers appear to be hardwired to play games
- Art has an impact on Software Testing
- Good mentoring is hard. Answer your mentees' questions with more questions
- It's a sin to test mobile apps sitting at your desk
- To be a good tester you need to be a "sort of skilled hacker"
- Pair with people in all roles; they will all give you different insights
- You can't prove quality with science, but you can prove facts that may alter judgement
- Try to show the need for what you want to teach before you try to teach it
- "Hey, these aren't just testers but real people!"

Many of the participants are sharing their own takeaways, and they are covering many of the ones already mentioned, but they are showing that many people are seeing that "there is an amazing community that looks out for one another and actively encourages each other to do their best work".

---

It's been fun, but all good things must come to an end. We are now at the closing keynote, given by Matt Heusser, and the title for this one is "Software Testing State of the Practice (And Art! And Science!)".

The first thing Matt says he is seeing is a pendulum swing toward programming, and back again. The debate over what testers should be doing, what kind of work they should be doing, and who should be doing it is still raging. Should testers be programmers? Some will embrace that, some will fight it, and some will find a place in the middle.

Automation and tech are increasing in our lives (there are supermarkets where self-checkout is becoming the norm). The problem with this prevalence is that we are losing the human touch and interaction. Another issue is that testing looks to be a transient career. It's entirely likely that more than 50% of the people who are here attending their first CAST will not even be in testing seven years from now.

Another issue that we are seeing is fragmentation within testing. What does "test" even mean? What is testing, and who has the final say on the actual definition? We have a wide variety of codebases and strategies for how we code and how we test. All of this leads to a discipline where many people don't really understand what it is that test teams do. To them, we are a check box that needs to be marked.

The Agile community in the early 2000s was in a similar situation, and they dared to suggest they had a better way to write software. That became the Agile movement, and it has changed much of the software development world. Scrum is now the default development environment for a large percentage of the software development population. Scrum calls for testing to be done by embedded members of the team, not by a separate entity.

Matt refers to three words that he felt would change the world of software testing. Those words are:

- Honesty
- Capable
- Reach

We need to be honest with our dealings and we need to show and demonstrate integrity. We need to prove that we are capable and competent, and we need to reach out to those who don't understand what we do or why we do it.

I added a comment to this talk in open season when asked about why testers tend to move out of testing, and I think there's something to be said for the idea that testers are broad generalists. We have to be. We need to look at the product from a wide variety of angles, and because of that, we have a broad skill set that allows us to pivot into different positions, either temporarily or as a new job. I personally have done stints as a network engineer, an application engineer, a customer support representative, and even a little tech writing and training, all the while having software testing as the majority of my job or as a peripheral component of it. I'm sure I'm not an isolated case.

There's a lot to be said about the fact that we are a community that offers a lot to each other, and we are typically really good at that (giving back to others). As I said in a tweet reply, if we inspire you, please bring the message back to your friends or your team. Let them know we are here :).


Tuesday, August 12, 2014

Live from New York, It's... #CAST2014


Hey! Finally, I'm able to work that silly line a little more literally ;). Yesterday was the workshop day for CAST, but today is the first full general conference day, so I will be live blogging as much as I can of the proceedings, if WiFi will let me.

CAST 2014 is being held at the New York University Kimmel Center, which is just outside the perimeter of Washington Square. The pre-game show for the conference (or one of them, in any event) is taking place in the lobby of the Marlton Hotel. Jason Coutu is leading a Lean Coffee event, in which all of the participants get together and vote on topics they want to talk about, and then discussion revolves around the topics that get the most votes.

---

This morning we started out with the topic of capacity planning, and the attempt to manage and predict team capacity. Variance between teams can be dramatic (a three-person test team is going to have lower capacity than a fifty or one hundred person team). One interesting factor is that estimation for stories is often wrong. Story points and the like often regress to the mean. One attendee asked, "what would happen if we just got rid of the points for stories altogether?" Instead of looking at points for stories, we should be looking at ways to get stories to roughly the same size. It may be a gut feeling, it may be a time heuristic, but the effort may be better spent just making the stories smaller, rather than getting too involved in adding up points.

Another interesting measurement is "mean time to fix the build". Another idea is to see which files get checked out and checked in most frequently, to see where the largest amount of modification is taking place and how often. Some organizations look to measure everything they can measure. One quip was, "are they measuring how much time they are spending measuring?" While some measurements are red herrings, there are often valid areas that make sense to measure so we can learn what needs to be remedied. A general consensus was that the desire for lots of finely granulated measurements is less effective than simply targeting effort at fixing issues as they happen and getting the release stable.
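
"Mean time to fix the build" is cheap to compute if your CI system can export a build history. A minimal sketch, assuming Python 3; the timestamps and statuses are made up for illustration, not pulled from any real CI tool:

from datetime import datetime

builds = [  # (timestamp, status) in chronological order
    ("2014-08-12 09:00", "fail"),
    ("2014-08-12 10:30", "pass"),
    ("2014-08-12 14:00", "fail"),
    ("2014-08-12 14:45", "pass"),
]

fmt = "%Y-%m-%d %H:%M"
gaps, broken_since = [], None
for timestamp, status in builds:
    t = datetime.strptime(timestamp, fmt)
    if status == "fail" and broken_since is None:
        broken_since = t                   # the build just broke
    elif status == "pass" and broken_since is not None:
        gaps.append((t - broken_since).total_seconds() / 60)
        broken_since = None                # the build is fixed again

print("mean time to fix (minutes):", sum(gaps) / len(gaps))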

---

Another topic was the challenge of what happens when a company has devalued or lost the exploratory tester skill set due to focusing on "technical testers". A debate came up over what "technical tester" actually means, and in general, it was agreed that a technical tester is a programmer who writes or maintains automated testing suites, and who meets the same level/bar that the software engineers meet. The question is, what is being lost by having this be the primary focus? Is it possible that we are missing a wonderful opportunity to work with individuals who are not necessarily technical, or for whom that is not the primary focus? I consider myself a somewhat "technical tester", but I much prefer working in an environment where I can do both technical and exploratory testing. A comment was raised that perhaps "technical tester" is limiting. "Technically aware" might be a better term, in that the need for technical skills is rising everywhere, not just in the testing space.

---

The last topic we covered was "testing kata", a topic of great interest to me because of ideas that we who are instructors in Miagi-do have been considering implementing. My personal desire is to see the development of a variety of kata that we can put together and use in a larger sphere. In martial arts (specifically, Aikido), there is the concept of "randori", an open combat scenario where the participant has multiple challengers and needs to use the kata skills they have learned. For the kata part, we have a lot of examples. The randori part is an open area that is ripe for discussion. The question is, how do we put it into practice? I'd *love* to hear other people's thoughts on that :).

---

After breakfast, we got things started with Keith Klain welcoming everyone to the event, and Rich Robinson explaining the facilitation process. As many know (and I guess a few don't ;) ) CAST uses a facilitation method that optimizes the way that the audience can participate. The K-cards that we give out let people determine how and where they can question and make sure everyone who wants a say can get their say.

James Bach is the first keynote, and he has opened with the reality that talking about testing is very challenging. It's hard enough to talk about testing with other testers, but talking to people who are not testers? Fuhgeddaboudit! Well, no, not really, but it sure feels that way. Talking about testing is a challenging endeavor, and very often there is a delayed reaction (Analytical Lag Time), where the testing you do one day comes together and gives you insights an hour or a day later. These are maddening experiences, but they are very powerful if we are aware of them and know how to harness them.

The title of James' talk is "Testing is Not Test Cases (Toward a Performance Culture)". James started by looking at a product that would allow a writer working on a novel to change and modify "scene" order. The avenues James was showing looked like classic exploratory techniques, but there is a natural ebb and flow to testing. The role of test cases is almost negligible. The thinking process is constantly evolving. In many ways, the process of testing is writing the test cases while you are testing. Most of the details of the test cases are of minor importance. The actual process of thinking about and pushing the application is hugely complex. An interesting point James makes is that there is no test for "it works". All I know for sure is that it has failed at this point in time.

The testing we perform can be informed by some basic ideas, some quick suggestions, and then following the threads that those suggestions give to us. Every act of (genuine) testing involves several layers. There's a narrative or explanation, there's an enactment of test ideas, there's the knowledge of the product that we have, there's the tester's role and integration in the team, there's the skill the tester brings to the table, and there's the tester's demeanor and temperament. All of these aspects come into play and help inform us as to what we can do when we test.

The act of testing is a performance. It can't truly be automated, or put into a sequence of steps that anyone can do. That's like expecting that we can get a sequence of steps so that anyone can step in and be Paul Stanley of KISS. We all can sing the lyrics or play the chords if we know them, but the whole package, the whole performance, cannot be duplicated, not that there aren't many tribute band performers that really try ;).

James shared the variety of processes that he uses to test. He shared the idea of a Lévy flight, where we sample and cover a space very meticulously, then jump up and "fly" to some other location and do another meticulous search. The Lévy flight heuristic is meant to represent the way that birds and insects scour areas, then fly off in what looks like a random manner, and then meticulously search again for food, water, etc. From a distance it seems random, but if we observe closely, we see that the random flying around is not random at all; instead, it's a systematic method of exploration. Other areas James talked about are modeling from observations, factoring based on the product, experiment design, and using tools that can support the test heuristic.
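
The pattern is easy to picture in code. Here's a toy sketch, assuming Python 3, of Lévy-flight-style sampling along a line: lots of small local steps punctuated by occasional long jumps. This is purely my illustration of the pattern, not anything James demonstrated.

import random

def levy_flight(steps=20, alpha=1.5, seed=42):
    # Pareto-distributed step lengths: mostly small, occasionally huge --
    # the signature of a Levy flight.
    random.seed(seed)
    position = 0.0
    for _ in range(steps):
        length = random.paretovariate(alpha)
        position += random.choice([-1, 1]) * length
        yield position

for spot in levy_flight():
    print(round(spot, 2))  # each "spot" would get a meticulous local search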

James created a "spec" based on his observations, but recognizes that his observations could be wrong, so he will look to share them with a programmer to make sure that his observations match the intended reality. There is a certain amount of conjecture here. Because of that, precise speech is important. If we are vague, we give leeway for others to say "yes, those are correct assumptions about the project". The more specific we are, the less likely that wiggle room will be there. Issues will be highlighted, and easier to confirm as issues, if we are consistent with the terms we use. The test cases are not all that interesting or important. The tester's spec, and the questions we develop and present at the end, is what matters. However, just as Paul Stanley singing "Got to Choose" at Cobo Hall in Michigan in 1975 will not sound exactly the same as the performance of the same song in Los Angeles in 1977, a testing round a week later may produce a totally different document, with similarities, but perhaps fundamental differences, too.

Does this all mean that test cases are completely irrelevant and useless? No, but we need to put them in the hierarchy where they actually belong. There is a level of focus and ways that we want to interact with the system. Having a list of areas to look at, so as to not forget where we want to go, certainly helps. Walking through a city is immensely more helpful if we have a map, but a map is not entirely essential. We can intuit from street names and address numbers, we can walk and explore, we can ask for directions from people we meet, etc. Will the map help us get directly where we want to go? Maybe. Will following the map show us everything we want to see? Perhaps not. Having a willingness to go in various directions because we've seen something interesting will tell us much more than following the map to the letter. So it is with test cases. They are a map. They are a suggested way to go. They are not *THE* way to go.

Ultimately, it comes down to the fact that testing is a performance. Fretting about test cases gets in the way of the performance. Back to watching KISS (or The Cure or The Weeknd, if we want to be more inclusive): they have songs, they have lyrics, they have emotive passages, but at the end of the day, an instance of a performance can be encoded, yet it represents only one instant in time. Every performance is different; every performance is on a continuum. You can capture a single performance, but not all of the performances that could be made. Cases can guide us, but if we want to perform optimally, we have to get beyond the test cases. We can capture the actual notes and words, but there is no automation for "showmanship" ;).

This comes down to tacit and explicit knowledge. I remember talking about this with James when we were at Øredev in Sweden, discussing how to teach something when we can't express it in words. There is a level of explicit knowledge that we can talk about and share, but there's a lot of stuff buried underneath that we can't explain as easily (the tacit knowledge). Transferring that tacit knowledge comes down to experience and shared challenges. Most important, it goes to actively thinking about what you are looking at, and doing what you can to make for a performance that is both memorable and stands up to scrutiny.

---

As in all of these sessions, there are so many places to go and talks to see that it is difficult to decide what to attend. For that purpose, I am deliberately going to let the WebCAST stream speak for itself. If you can view the WebCAST presentations, please do so. Once the recordings are available, they will be posted to the AST channel for later viewing. For that reason, I am going to focus on sessions that are not going to be recorded, as well as ones relevant to my own interests and aspirations. With that, I was happy to join Alessandra Moreira (@testchick) for her talk "My Manager Would Never Go For That", or more succinctly, how to apply context-driven principles to the art and act of persuasion. I always think of the scene in the film "Amadeus" where Mozart is trying to convince the Emperor to let him go forward with the staging and presentation of "The Marriage of Figaro". The Emperor at one point says, "You are passionate, Mozart, but you do not persuade". This is a key reminder to me, and I'm guessing Ale is very familiar with this. Being passionate is not enough; we have to persuade others to see the value in what we are passionate about.


Sometimes this comes down to a decision of "should I stay or should I go?". Do I change my organization, or do I change my organization (that is, do I improve the one I'm in, or find another)? Persuasion may or may not come about, but odds are we can do a better job of persuading if we are ourselves willing to be persuaded. Conversations are two-way streets. We learn new things all the time. Are we willing to adapt our position given new information? If not, why not? If we think that our way is the best way, and we are not willing to bend with what we are told, why should anyone else be persuaded by us? Influence is not coercion, and it's not manipulation; it's a process of guiding and suggesting, offering information and examples, and "walking the walk" that we want to influence in others. There is a three-step process in the art of persuasion. First, we need to discover something that we feel is important. Second, we need to prepare, to get our ducks in a row so to speak; we need to know and have supporting evidence that we understand what we are doing and that we have a compelling case. From there, we need to communicate and embark on an honest and frank dialog about the outcome we want to see.

In my own experience, I have found that persuasion is much easier if you have already done something on your own and experienced success with it. Sometimes we need to explore options on our own and see if they are viable. Perhaps we can find one person on our team who can offer a willing ear. I have a few developers on my team who are often willing to help me experiment with new approaches, as long as I am prepared to explain what I want to do and have done my homework up front. People are willing to give you the benefit of the doubt if you come prepared. If you can present what you do in a way that convinces the person who needs to be persuaded that you have worked to be ready for them to do their part, they are much more likely to go along with it. Start small, and get some little successes. That will often get these first few "adopters" on board with you, and then you can move on to others. Over time, you will have proven the worth of your idea (or had it disproved), and you can move forward, or refine and regroup to try again.

I'm fortunate in that I have a manager who is very willing to let me try just about anything if it will help us get to better testing and higher skill, but it's likewise important that I do my homework first, as it helps to build my credibility on the topic at hand. Credibility goes a long way toward persuading others, and credibility takes time to build. With credibility comes believability, and with believability comes a willingness to let you try an idea or experiment. If the experiments are successful in their eyes, they will be more likely to let you do more in the areas where you are aiming to persuade. If the experiments fail, do not despair, but it may mean you have to adapt your approach and make sure you understand what you need to do and how it fits in your own organization.

One of the key areas where people fail when it comes to persuasion is "compromise". Compromise has become a bad word to many. It's not that you are losing; it's that you are willing to work with another person to validate what they are thinking and to have them see what you are thinking. It also helps to start small: pick one area, or a particular time box, and work out from there.

---

During the lunch break, Trish Khoo stepped on stage to talk about the ideas of "Scaling Up with Embedded Testing". Trish described a lot of the testing efforts she had been involved in, where the code that she had written was disregarded by the programmers, since they felt what she was doing was just some other thing that had to be done so the programmers could do what they needed to do. Fast forward to her working in London, where the programmers were talking about how they would test the code they were writing. This was a revelation to her, because up to that point, she had never seen a programmer do any type of testing. Since many of the efforts she was used to doing were now being taken care of by the programmers, her role became more challenging, so she had to be a lot more inquisitive and aggressive in looking for new areas to explore.

We often think of the developer and tester as being responsible for finding and fixing bugs, and the product owner and the tester as responsible for verifying expectations. The bigger challenge is that these loops chew up hours, days and weeks as we constantly cycle through finding and fixing bugs and verifying expectations. Interestingly, when we ask developers to focus on writing tests to help them write better code, the common answer is "yeah, that makes sense, but we don't have time to do that now, we'll do that next sprint", and then they say the same thing the next time, if it comes up again. How do we convince an organization to consider a different approach to getting developers more involved in test? Trish looked at a number of different organizations to see how they did it.
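
To make "developers writing tests" concrete at its smallest scale, here is a minimal sketch using Python 3 and the standard unittest module; the function under test is invented for the example.

import unittest

def normalize_zip(text):
    # Strip whitespace and keep only the first five digits.
    digits = "".join(ch for ch in text.strip() if ch.isdigit())
    return digits[:5]

class NormalizeZipTest(unittest.TestCase):
    def test_strips_whitespace_and_extras(self):
        self.assertEqual(normalize_zip("  94103-1234 "), "94103")

    def test_handles_short_input(self):
        self.assertEqual(normalize_zip("123"), "123")

if __name__ == "__main__":
    unittest.main()

A few minutes of this while the code is being written is exactly the investment the "we'll do that next sprint" answer keeps deferring.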

One of the people Trish talked to was Elisabeth Hendrickson of Cloud Foundry/Pivotal. Interestingly, Cloud Foundry does not have a QA department. That's not to say that they do not have testers, but they have programmers who test, and testers who program. There is no wall. Everyone is a programmer, and everyone is a tester. Elisabeth has a tester on the team by the name of Dave Liebreich (Hi Dave ;) ). While he is a tester, he also does as much testing as the programmers, and as much code writing as the programmers.

Another person she talked to was Alan Page of Microsoft (Hi, Alan ;) ). Some of the teams at Microsoft have moved to a model where everyone has dispensed with job titles. Ask Alan what his job title is, and he'll say "generic employee" or, if pushed, "software engineer". The idea is that they are not confined to a specific specialty. The goal is that, instead of having people in roles, they open up the opportunities for people to do what their skill set and passion provide. The net result is that managers orchestrate projects based on skill. Instead of hiring "testers who code", they are looking to hire "people who can solve problems with code". The idea of tester as a role is not relevant; everyone codes, everyone tests.

The third case study was with Michael Bachman at Google. In a previous incarnation, Google would outsource a lot of the manual testing to vendors, mostly to look at the front-end UI. The coverage those testers were addressing ignored about 90% of the code in play. For Google to stay competitive, they opted to change their organization so that Engineering owned quality as a whole. There was no QA department. Programmers would test, and there was another team, called Engineering Productivity, that helped teach about areas of testing, as well as investing in Software Engineers in Test (SETs), who could then help instruct the other programmers in methods related to software testing. The idea at Google was that "Quality is Team Owned, not Test or QA Owned".

What did they all have in common? Efficiency was the main driver. Teams that have gone to this model have done so for efficiency reasons. There are lots of other words associated with this (Education, Feedback, Upskill, Culture, No Safety Net, etc.). One word that is missing, and that I'd be curious to see, is effectiveness. Overall, based on the presentation, I would say that effectiveness was also part of this process. Efficiency without effectiveness will ultimately cause an organization to crash and burn; therefore, there must be value in these changes.

So what does that mean for me as a tester? It means the bar has been raised. We have some new challenges, and we should not be afraid to embrace them. Does that mean that exploratory testing is no longer relevant? Of course not; we still explore when we develop tool-assisted testing. We do end up adding some additional skills, and we might be encouraged to write application code, too. As one who doesn't have a "developer" background, that doesn't automatically put me at a disadvantage. It does mean I would be well served to learn a bit about programming and get involved in that capacity. It may start small, but we can all do some of it if we give it a chance. We may never get good enough at it to become full-time programmers, but this model doesn't really require that. Also, that's three companies out of tens of thousands. It may become a reality for more companies, but rather than be on the tail end of the experience and have it happen to you, perhaps it may help to get in front of the wave and be part of the transition :).

---

The next session I opted to attend was "The Psychology and Engineering of Testing", presented by Jan Eumann and Ilari Aegerter. Both Jan and Ilari work with eBay, but they are part of the European team, and the European market and engineering realities are different from what goes on in Silicon Valley. There is a group of testers based in London and Berlin that gets software from the U.S. to test, while the European team has software testers embedded in the development teams.

Who works as an embedded tester in an Agile team? Overall, they look for individuals with strong engineering skills, but they also want to see the passion, interest and curiosity that make an embedded tester formidable. The important distinction is that the embedded testers at eBay in Europe are not thinking of programming and testing as either/or, but as "as well as". They are encouraged to develop a broad set of skills, to help solve real problems with the best people who can solve them, rather than to focus on just one area.

When Jan and Ben Kelly were embedded within the European teams, there was an initial experience of Testers vs. Programmers, but over time, developers became test-infected, and testers became programming-savvy along with it. This prompted other teams to say "hey, we want a tester, too". In this environment, testers and programmers both win.

Though there are integrated teams, the testers still report to testing managers, so while there are still traditional reporting structures, there is a strong interconnected sense between the programmers and testers in their current culture. The Product Test Engineering team has their own Agile manifesto that helps define the integration and importance of the role of test, and how it's a role that is shared through the whole team. If the goal of an embedded tester is to be part of a team, then it makes sense, in Jan's view, to be with the team in space, attitude and purpose. Sitting with the programmers, hanging with the programmers, meeting with the programmers, all of these help to make sure that the tester is involved right from the start.

Additionally, testers can help teach programmers some testing discipline and an understanding of testing principles. Testers bring technical awareness of other domains. They also have the ability to help guide testing efforts in early-stage development, and to inform and encourage areas to be set up that programmers might not set up were there not a tester involved. It sounds like an exciting place to be a part of, and an interesting model to aspire to.

---

I love the cross pollination that occurs between the social sciences and software testing, and Huib Schoots has a talk that addresses exactly that.

We often treat software testing and computer science as though they are hard sciences like mathematics or physics or chemistry. They have principles and components that are similar, but in many ways, the systems that make software are more akin to the social sciences. We think that computers will do the same thing every single time in exactly the same way. Fact is, timing, variance, user interactions, congestion and other details all get in the way of the specific factors that would make "experiments" in the computer science domain truly repeatable. I've seen this happen in continuous integration environments, where a series of tests that failed on one run passed the second time they were run, without changing any parameters. Something caused the first run to fail and the second one to pass. There can be lots of reasons, but usually they are not physics or mechanical details; they are coding and architectural errors. In other words, people making mistakes. Thus, social rather than hard sciences.
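
As a toy illustration of that non-repeatability (my example, not Huib's): two threads updating a shared counter can yield different results on different runs of the exact same code, because the interleaving of operations is a timing accident. A minimal sketch, assuming Python 3:

import threading

counter = 0

def bump():
    global counter
    for _ in range(100000):
        # Not atomic: the read, add, and write can interleave across threads.
        counter += 1

threads = [threading.Thread(target=bump) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000; on some runs you may see less. Same code, same inputs,
# different outcome -- the kind of "irreproducible" failure CI surfaces.
print(counter)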

Huib shifted over to the ideas in the book "Thinking, Fast and Slow", in which simple things are calculated or evaluated very quickly, and other more complicated matters require a different kind of thinking. Karl Marx developed theories about how people should interact, and while the theories he prescribed have been shown to not be ideal, they are still based on the realities of how humans interact with one another. The science of Sociology informs many aspects of the way that we work and interact with others, which in turn informs our designs of systems. Claude Lévi-Strauss represents Anthropology, which deals with the way that different cultures are structured and with the environmental factors that help to inform those options. Maria Montessori represents Didactics and Pedagogy, a.k.a. learning and the methodology that informs how we learn. He used his girlfriend to represent Communication studies, and the fact that the way we talk to one another informs the way we design systems, because the communication aspect is often what gets in the way of what an application does (or should I say, the inability to communicate smoothly gets in the way).

Science and research inform a great deal of what a software tester actually does. Sadly, very few software testers are really familiar with the scientific method, and without that understanding, many of the options that can help inform test design are missing. I realized this myself several years ago, when I stopped considering a long list of test cases to be effective testing. By going back and considering the scientific method, I was able to reframe testing as though it were a scientific discipline in and of itself. However, we do ourselves a tremendous disservice if we only use hard science metaphors and ignore the social sciences and what they tell us about how we communicate and interact.

We focus so much attention on trying to prove we are right. That's a fallacy; we cannot prove we are right about anything. We can disprove, but when we say we've proven something, what we mean is that we have not found anything that disproves what we have seen. Over time, and with repeated observation, we can come close to saying something is "right", but only until we get information that disproves it. The theory of gravity seems to be pretty consistent, but hey, I'll keep an open mind ;).

Humans are not rational creatures. We have serious flaws. We have biases we filter everything through. We are emotional creatures. We often do things for completely irrational reasons. We have gut feelings that we trust, even if they fly in the face of what would be considered rational. Sometimes they are right, and sometimes they are wrong, yet we still heed them. Testing needs to work in the human realm. We have to focus on the sticky and bumpy realities of real life, and our testing efforts likewise have to exist in that space.

---

Martin Hynie and Christin Wiedemann presented a talk that was all about games. Well, to be more specific, "Why testers love playing – Exploring the science behind games". Games are fundamental to the way we interact with our environment and with others. Games help us develop cognitive abilities; the more we play, the more cognitive development occurs. This reminds me a lot of the work and talks I have seen given by Jane McGonigal on how gaming and game culture affect both our thinking and our psychological being.

OK, that's all cool, but what does this have to do with testing?

Testing is greatly informed by the way we interact with systems. We try out ideas and see if they will work based on what we think a system might do. While the specific skills learned in games may not transfer, the ways of looking at situations, and the inspiration that can give us ideas, do. Game play has been shown to modify and change our cortical networks. It was fascinating to see the way in which Martin and the other testers on the team approached this as a testing challenge.

The test subjects had their brains scanned while they were playing games, and the results showed that gaming had a measurable impact on the brain. Those who played games frequently showed cooler areas of the brain than those who did not play games, which suggests that gaming optimizes neural networks in many cases. Martin also took this process to a more specific game, i.e. Mastermind, to see what that game did to his brain.

So are games good for testers? The jury is out, but the small sample set certainly seems to indicate that yes, games do indeed help testers, and the culture of tester games, and other games, is indeed healthy. Hmmm, I wonder what my brain looks like on Silent Hill... wait, forget I said that, maybe I really don't want to know ;).

---

A great first day, so much fun, so much learning, and now it's time to schmooze and have a good time with the participants. See you all tomorrow!

Saturday, August 9, 2014

From Mid-Town Manhattan, it's #TestRetreatNYC


In the offices of LiquidNet, a group of intrepid, ambitious testers met and decided to discuss what aspects of testing mattered to us. Test Retreat is an Open Space conference, where the sessions begin when the participants want to have it begin (and end), the people who are there are the people who need to be there, and the law of two feet rules. We took some time to discuss topics, pull together similar topics and threads, and then head into our places to talk about the stuff that is burning us up inside.

---

The first session I attended was based around developing a testing community within a city or region. As many of us were part of different communities around the world, there were a variety of experiences, ranging from very small markets with perhaps a hundred testers total, to large regions with hundreds of thousands of testers, but little in the way of community engagement. 

Rich Robinson led the discussion and shared his own experiences with growing and scaling the community in Sydney, Australia. Rich shared that there were three areas that were consistently asked for and looked at as the goals of the attendees: looking for work, finding out the latest trends, and honing and sharpening skills. Rather than try to be all things to all people, they developed a committee and separate initiatives, with committee members focusing on the initiatives that mattered to them. For the group that wanted to get work, they made an initiative called “opportunity seekers”, and focused energy on those who had that as their biggest objective. For those who want to focus on latest trends, they have an avenue for that as well. 

Different regions have unique challenges. In the Bay Area, we have an overabundance of meetups for technical topics, of which a handful of them relate to software testing. As a founder of Bay Area Software Testers (BAST), Curtis Stuehrenberger, Josh Meier and I have chosen to try to focus on areas that are not as often discussed (we tend to steer clear of automation tools and techniques, since there are dozens of other meetup groups that cover those topics). Another challenge is frequency. In some cases, regular and frequent meetings are key. In others, having them less frequently works, but the key is that they meet regularly (monthly, every six weeks, or quarterly seem to be the most common models).

Rich also shared that their initiatives branch away from the formal meetup sessions, and have other opportunities that they initiate that occur outside of the formal meetup times. By having each initiative have people committed to it and resources to help drive those initiatives, buzz gets generated and more people get involved. One of the key things that Rich emphasized was getting people involved and engaged for these initiatives. The Sydney tester group has a committee of ten members that helps make sure that these initiatives are staffed and supported.

Another challenge is the local regions themselves. Some cities have sprawl; others are difficult to get to in a timely manner due to traffic and population density. For example, the San Francisco Bay Area has four general regions: San Francisco (and the upper San Francisco Peninsula), Silicon Valley (and the lower San Francisco Peninsula), the East Bay, and the North Bay. With a few exceptions, people who participate in one generally do not regularly participate in the others. Reaching a broader community in these sub-regions may require using technology and remote-access options for people to participate.

Ultimately, growing a community takes time, it takes dedicated people, it takes a range of topics that matter to the attendees (including making sure that food and drink is there ;) ). To get those people you want to be involved, it helps to be very specific about what is needed. Saying “I need help with this” is less effective than saying “I need this specific thing to be done at this time for this purpose”. Specificity helps a lot when recruiting helpers.


The next session was “Sleep No More”, presented by Claire Moss, and focused on the model of the performance/play called Sleep No More (which is, in some ways, described as “an immersive performance of Macbeth”). It’s a darkened environment, all participants wear masks, no photography, no talking, just experience as the person sees it. Exploratory Testing, in many ways, has similarities to this particular experience. Claire used a number of cards to help display the ideas, and one of the first ideas she shared was “fortune favors the bold”. Curiosity and a willingness to go in without fear and deal with a substantial amount of “vague” is a huge plus. If you already have that, you have a strong advantage. If this is not natural for you, it can be developed.

Each room in the Sleep No More experience was part of the performance, and at any time, rooms could be empty or have people filter in during the performance. There are “minders” in the event that help to make sure that people don’t completely lose track of where they are. At times, there are very personal experiences that take place based on your tracks and where you go. Claire described a very intense experience of the performance based on where she went and what she observed and chose to follow up on. She also said that, up to this point, no one that she knew had anything like the experience she had.

The experience of Sleep No More was bizarre, creepy, full of strange triggers, and had the potential to go in wildly unexpected directions. Software testing in many ways mirrors this experience. While there may be familiar areas and ideas, very often our choices and angles may take us into very unexpected places. To give an example of the scope of the space, this was in a six-story building that used to be a hotel (and whole areas of the building were gutted; in some spaces, multiple floors were open to the air and visible).

Claire described a feeling of “amazement fatigue”, where the level of stimulus is so high that there is no way to take it all in. The participants have to make conscious choices as to where they will go, and many of them will have wildly different experiences. Sometimes they would follow a character, only to watch them go through a door and close and lock it, so that they couldn’t be followed any longer. This reminds me of following the threads of a feature and being brought to a dead end. People will observe different things, and they will also observe what other people do and what they focus on. This can give us clues as to the areas we want to explore next.

This experience sounds amazing, and I am definitely interested in going and doing it myself, if time and commitments permit me to do so. Looks like I will be attending the August 10 performance :).


The next session was on “Leadership”. Natalie Bennett led it with the goal of seeing where individuals felt their experiences or needs around leadership were, rather than telling us what she felt about leadership and how to do it. The questions Natalie wanted to discuss were:

- What is the purpose of a test team lead?
- What is it for?
- What makes it different from being a test manager?

The discussion shifted from there into ways that test team leads and test managers were similar and ways they differed. Some of the participants talked about how they led by example, and how they divvied up the work among the group based on the people involved and what they were expected to do. Team leads in general do not have hiring/firing authority, and they typically do not write reviews or have input into salary decisions. In other environments, the team manager and team lead are one and the same. Some are cynical about the effectiveness of this arrangement, while others feel that it is possible to be both a team lead and a team manager. One attendee who is a Director of Q.A. for her company said that she was “the face of Q.A.” to the organization, and as such, she was setting the direction and expectations for the organization, as well as for her own direct reports.

Team leads are expected to teach and coach the members of their group, as well as be the point of contact for the group. It’s seen as important that they be able to focus on and develop their own role and make it responsive to their own environment. The team lead stands up for the group, and defends it from encroachment by issues and initiatives that are counter-productive to its success. Responsibility and authority tend to be on a sliding scale: different companies allow their leads different levels of authority. Some grant a level of authority that is just short of being an actual manager; in others, the lead is considered a “first contact” among equals.

One of the bigger challenges is dealing effectively with team members who are failing. Failing in and of itself is not bad; it’s important to learn, and failing is how you learn. But when the failing is chronic or insurmountable, a different level of interaction is needed. Lean Coffee, direct mentoring, or even a serious reconsideration of experiences and goals can be hugely beneficial, both for the individual and for the team as a whole.


Matt Heusser led a session about “Teaching Testing”, and some of the challenges that we face when we teach software testing to others. When we have an engaged and focused person, this usually isn’t a problem. When the person in question isn’t engaged, or is just going through the motions, it’s a little more difficult. The question we focused on at first was “what methods of teaching have worked for you?” Testing is a tactile experience, rather than a set of abstract questions. We are familiar with questions like “how do you test a stapler?” or “how can you test a Rubik’s Cube?” The presentation of this challenge may be the most important aspect. Some might look at “how do you test a stapler?” as demeaning: they are professionals, so what is this going to teach them?

In my experience, one of the things I have found helpful is to actually spell out how challenging the exercise can be. Rather than ask “How do you test a stapler?”, I might instead say “Tell me the 120 ways that you can test a stapler to confirm it’s fit for use.” This sets a very different expectation. Instead of saying “oh, this is trivial”, by seeding a high number, they may want to see how they can meet or exceed that number. They become engaged.

To borrow a bit from Seth Godin, there are two primary goals for everyone, regardless of discipline. The first is to focus on authentic problems. The second is to be able to lead. Domain knowledge is a huge factor in helping to identify authentic problems. It’s not the only means, but really getting to know the domain can help inform our testing ability. Another important aspect is to understand how people learn. Everyone goes about learning a bit differently, and helping each person learn how they learn can be a huge step in teaching them. Sometimes the ripest area for learning is to wade into an area where people disagree, or where there is dysfunction: where team members don’t talk to each other, or there’s simmering hostility between people. If there’s hostility between two programmers who write software that interacts, it’s a good bet there’s a goldmine of issues at their interaction points (I think this is a very interesting idea, btw :) ).

Key to teaching testing is the ability to reflect on and confirm what has been taught and learned, and I think Weekend Testing does this very well. The benefit of Weekend Testing, beyond just doing the exercise, is that we can see the lightbulbs turning on, and there’s a record of it that others can see and learn from. Creating HowTo’s can also be a helpful mechanism for this.


The next session was the talk that Smita Mishra and I gave about “Hiring Testers and Keeping Them Engaged Once We’ve Hired Them”. I recorded this session, and I will transcribe it later ;).


Claire Moss led a session on “Communicating to Management”, and we considered a list of questions that are important for framing the conversation(s):

- What does quality look like to our organization?
- Why spend money on testing?
- What does testing do?
- What value are we getting out of testing?
- “I read this about QA, and it says we should do this… why aren’t we doing this?”

These are all questions that we need to be prepared to answer. The question is, how do we do that?

There are several methods we can use, but first and foremost, we need to determine what we need to speak with management about, and, where possible, use those opportunities to help educate them about what we can do, while getting a clear understanding of their view of the world.

Looking to standards and practices can give us guidance, but they don’t always represent our reality. Information needs to be specific to where we stand at a given time. Testing is primarily focused on giving quality information to the executives so that they can make qualified decisions. That is first and foremost our mission. Information that we can effectively provide includes:

- Framing the ecosystem on a global scale (browser standards, trends, data usage histories)
- Assessing the impact on customers (client feedback, analytics data)
- Clarifying issues and questions (heading off the executive freakout)
- Managing expectations (especially when dealing with something new)
- Explaining how likely it is that issues brought to their attention are really problems worth investing in
- Explaining risk factors and methods to mitigate those risks


At the end of the day, we had a lot of new ideas, feedback on some new initiatives, an emphasis on better communication, more focused due diligence, and a sense that many participants had a lot they felt they could contribute. This was a fun and active day, with a lot of learning and connecting. One of the things that always impresses me about these events is that we really do have a lot of solid people in the testing community, but we need even more.

I encourage every tester who admires craftsmanship, skill, and thinking to make it a point to come to these now-annual events (this is the third of them, so I think it’s safe to say it’s a thing now ;) ). Once again, thanks Matt (Heusser) and Matt (Barcomb) for organizing what has become my favorite Open Space event. May there be many more.


Friday, August 8, 2014

Silence Can Be Powerful

This past Tuesday evening, the Boy Scout troop that I am Scoutmaster for (Troop 250 in San Bruno) held its big annual Homecoming Court of Honor. This is typically a big affair, in that it encompasses our Scout Camp week and all the awards earned while there (which is to say, a lot of them).

One of the things I have been trying to do with the boys in my troop is encourage them to take on challenges and take chances. Ultimately, a well-run troop is run by the boys, not by the adult leaders. Still, it's very common for them to ask me a lot of questions about what they should or shouldn't do, and normally, I'm ready and willing to provide answers.

This time, though, I decided to do something unprecedented, at least as far as a Court of Honor was concerned. The scouts are familiar with what we call a "silent" campout. At a silent campout, the adult leaders camp in a site adjacent to the one the scouts are camping in, and we follow all rules of safety and emergency preparedness, but other than that, we stay in our camp site and they stay in theirs. They set up, cook, clean, make and break down fires, and do anything else they need, all without any input from the adult leaders. The goal is to have them learn from their own mistakes, and to have them work with each other to solve their problems rather than have the adult leaders do it for them.

I decided to take this one step further and declared that our Court of Honor would be a "silent" Court of Honor. In other words, the scouts would run it and handle all of the specific details (giving out awards, recognizing rank advancements, etc.), and the other adult leaders and I would sit back and watch. We would not speak, we would not direct, and we would not answer questions.

So how did it work out? Splendidly!

Granted, there were several times where I had to sit on my hands and keep my mouth shut when I so wanted to say "no, not that way, do it like this!" That, however, was not the point. It wasn't my Court of Honor, it was theirs. If they neglected to bring something out, put something on display, or do something they had seen me do dozens of times, that didn't matter. What I wanted to see from them was what they felt was important. I wanted to see which aspects of a Court of Honor they wanted to do. They jumped at the chance to do a skit. They liked doing silly one-liners. They enjoyed the awards part, and they left me a little time at the end to speak my piece. Which I did, but only at the end.

Many times, I think we do a disservice to those who are learning and trying to figure out what's important and what isn't. Coyote Teaching focuses on leveraging the environment and addressing real needs, as well as on the art of questioning: asking more questions rather than giving direct answers. I'd like to add to that the very real teaching tool of "be quiet". Sometimes it's best not to answer, or to remove ourselves entirely. Sure, there may be stumbling, there may be things said that are not perfect, or some key stuff may get forgotten, but that's OK.

What's important is to give the people you are working with a chance to discover what is important to them, and let them reach that conclusion themselves. It would have been very efficient to correct them and tell them what to do, but it would have been far less effective than giving them the chance to run the program all on their own. They've participated in several Courts of Honor over the years that I have run, and regardless of how flawlessly I may have done them in the past, none of them will be as memorable, or mean as much, as this one. The reason? By giving them silence, they got to experience and do for themselves what they wanted to do, and honor the troop in the way that actually mattered to them.

I'm super proud of all of them... and yes, I took notes ;).

Thursday, August 7, 2014

Stepping Back, Taking a Breath, Letting Go, and Saying "NO"

Many will, no doubt, notice that my contributions to this blog have been spotty the past few months. There's a very specific reason.

A few months back, I did an experiment. I decided to sit down and really see how long it took me to do certain things. I've been reading a lot lately about the myth of multi-tasking (as in, we humans cannot really do it, no matter what we may think to the contrary). I'd been noticing that a lot of my email conversations started to have a familiar theme to them: "yeah, I know I said I'd do that, and I'm sorry I'm behind, but I'll get to that right away".

Honestly, I meant it each and every time I wrote it, but I realized that I had done something I am far too prone to do. I too frequently say "yes" to things that sound like fun, sound like an adventure, or otherwise interest and engage me. In the boundless optimism of the moment, I say "sure" to those opportunities, knowing in the back of my mind that there's going to be a time cost; but that cost is really fuzzy, and I couldn't quantify it in a meaningful enough way to guard myself.

I decided I needed to do something specific. I purchased a 365-day calendar (the kind with tear-off pages for each day), and I took all of the dates from January through May (May being the current month at the time). On each sheet, I wrote down something I had said or promised someone I would do. Some of them were trivial, some were more involved, and some were big-ticket items like researching an entire series of blog posts or working through a full course of study for a programming language. As I started jotting them down, I realized that each time I wrote one down, another popped into my head, and I dutifully wrote that one down too, and another, and another, until I had mostly used up the sheets of paper.

WOW!!!

I came to the conclusion that I would have to do some drastic time management to actually get through all of these, and part of that was finding out where I actually spent my time and how long it took to actually complete these tasks. I also told myself that I was going to curtail my blog writing until I got through a bunch of them. I've often used my blog as "healthy procrastination", but I decided that, unless I was discussing something time-sensitive or I was at an event, the blog would have to take a back seat. That's the long and short of why I have written so little these past three months.

In addition, I came to a realization that matched a lot of what I had been reading about multi-tasking and effectively transitioning from one task to another. For every two tasks I tried to accomplish at the same time, the turnaround time was four hours above and beyond what it would have taken to do those tasks individually (in other words, two tasks of three hours each would end up taking around ten hours interleaved, rather than six). That was the absolute best-case scenario, with me firing on all cylinders and my brain in "hot mode". As I've said in the past, to borrow from James Bach, my brain is not a well-oiled machine. Instead, it's very much like an unruly tiger. I can have all the desire in the world, and all the incentives to want to get something done, but unless "the tiger" was in the mood, it just wasn't going to be a product I, or anyone else, would be happy with.

The stimuli that had the best effect were an absolute drop-dead date and another person depending on what I was doing to make something happen. Even then, I found myself delivering so close to the drop-dead date that it was making both me and the people I was collaborating with anxious.

Frankly, that's just no way to live!

Next week is CAST. I am excited about the talk I am delivering. It's about mentoring, using a method called Coyote Teaching, and the rich (but often expensive) way it allows for not just the transfer of skills, but also truly effective understanding. In the process of writing and working on this talk with my co-presenter, Harrison Lovell, I decided to use it on myself; a little bit of "Physician, heal thyself". I came to realize that my expectational debt was growing out of control again. In the effort to try to please everyone, I was pleasing no one, least of all myself. Additionally, I have been looking at what the next year or so is shaping up to look like, and where my time and energy will be needed, and I came to the stark realization that I really had to cut back my time and attention on a variety of things that, while they sounded great on the surface, were just going to take up too much time for me to be effective.

I've already conversed with several people and started the process of tying up and winding down some things. I want to be good to my word, but I have to be clear about what I can really do and what time I actually have to do it. Time and attention are finite. We really cannot make or delay time. No one has yet made the magic device from "The Girl, The Gold Watch and Everything", and time travel is not yet possible. That means all I can do is use the precious 24 hours I am granted each day to meet the objectives that really matter. It means I really and honestly have to exercise the muscles that control the answer "NO" much more often than I am comfortable doing. I have to remind myself that I would rather do fewer things really well than do a lot of things in a mediocre or poor fashion.

I am appreciative of those who have willingly and understandingly stepped in and taken over areas that I needed to step back from. Others will follow, to be certain. For the most part, though, people are actually OK with it when you say "NO". It's far better than saying "YES" and having that yes disappear into a black hole of time, needing constant prodding and poking to bring it back to the surface.

I still have some things to deliver, and once they are delivered, I'm going to tie off the loose ends: move on where I can, hand off what I must, and focus on the areas that are the most important (a list which, I realize, can change daily). Here's to a little less cluttered, but hopefully more focused and effective, few months ahead, and what I hope is also a more regular blog posting schedule ;).