
Monday, August 25, 2025

Building Your Tester Survival Guide with Dawn Haynes: A CAST Live Blog

For the past couple of days, as we have been getting CAST ready to go, I've done a number of item runs, food stops, and bits of logistical troubleshooting with Dawn Haynes, which has been a common occurrence over my years with CAST. Dawn and I have frequently been elbows deep in dealing with the realities of these conferences. One funny thing we quipped about is the fact that any time we appear at conferences together as speakers, somehow we are always scheduled at the same time (or at least a lot of the time). I thought that was going to be the case this time as well but NO, the schedule has allowed us to not overlap... for ONCE :)!!!

I first learned about Dawn through her training initiatives long before I was actually a conference attendee or speaker. She appeared as a training course provider in "Software Test and Performance" magazine back in the mid 2000s. Point being, Dawn has been an expert in our field for quite some time, and thus, if Dawn is presenting on a topic, it's a pretty good bet it's worth your time to sit and listen. Dawn is the CEO and resident Testing Yogini at PerfTestPlus, so if you want to get a firsthand experience with her, I suggest doing it if you can. For now, you get me... try to contain your excitement ;).

One key area that Dawn and I are both aligned on and wholeheartedly agree with is that, as testers, quality professionals, whatever we call ourselves, we are individually responsible for curating our own careers, and if you have been in testing for an extended period, you have probably already had to reinvent yourself at least once or twice. Dawn wants to encourage all testers and quality professionals to actively develop their survival instincts. Does that sound dire? It should... and it shouldn't. Dawn's point is that testing is a flexible field and what is required one day may be old hat and not needed the next. As testers, we are often required to take on different roles and aspects. During my career, I have actually transitioned a few times into doing technical support over active day to day testing. That's a key part of my active career curation. I've actually been hired as a tech support engineer only for them to realize that I have had a long career in software testing and the next thing I know, I'm back and actively doing software testing full time. In some cases, I have done both simultaneously and that has kept me very busy. My point is, those are examples of ways that testing skills can be applied in many different ways and with many different jobs.

Consider automating stuff, doing DevOps, running performance or security audits, or looking at areas your organization may not be actively working towards and playing around with them. As you learn more and bring more to the table, don't be surprised if you are asked to do more of it, or to leverage those skills to learn about other areas.

Some areas are just not going to be a lot of fun all of the time. Sometimes you will take a while to get the skills you need. You may or may not get the time to do and learn these things, but even if you can just spend 20 minutes a day, those efforts add up. Yes, you will be slow, unsure, and wary at first. You may completely suck at the thing that you want to/need to learn. You may have deficiencies in the areas that you need to skill up on. The good news is that's normal. Everyone goes through this. Even seasoned developers don't know every language or every aspect of the languages they work with. If you are not learning regularly, you will lose ground. I like Dawn's suggestion of a 33/33/33 approach: learn something for work, reach out to people, train and take care of yourself. By balancing these three areas, we can be effective over time and have the health and stamina to actually leverage what we are learning. We run the risk of burning ourselves out if we put too much emphasis on one area, so take the time to balance those areas and also allow yourself to absorb your learning. It may take significant time to get good at something, but if you allow yourself the time (not to excess) to absorb what you are learning, odds are you will be better positioned to maintain and even grow those skills.

One of the best skills to develop is to be collaborative whenever possible. Being a tester is great, but being able to help get the work done in whatever capacity we can is usually appreciated. A favorite phrase on my end is, "There seems to be a problem here... how can I help?" Honestly, to date I've never been turned down when I've approached my teams with that attitude.

Glad to have the chance to hear Dawn for a change. Well done. I'm next :).   



We're Back: CAST is in Session: Opening Keynote on Responsible AI (Return of the Live Blog)

Hello everyone. It has been quite a while since I've been here (this feels like boilerplate at this point, but yes, it feels like conferences and conference sessions are what get me to post most of the time now, so here I am :) ).

I'm at CAST. It has been many years since I've been here. Lots of reasons for that, but suffice it to say I was asked to participate, I accepted, and now I am at the Zion's Bankcorp Tech Center in Midvale, UT (a suburb/neighborhood of Salt Lake City). I'm doing a few things this go around:

- I'm giving a talk about Accessibility and Inclusive Design (Monday, Aug. 25, 2025)

- I'm participating in a book signing for "Software Testing Strategies" (Monday, Aug. 25, 2025)

- I'm delivering a workshop on Accessibility and Inclusive Design (Wednesday, Aug. 27, 2025)

In addition to all of that, I'm donning a Red Shirt and acting as a facilitator/moderator for several sessions, so my standard Live Blog posts for every session will by necessity be fewer this go around, as I physically will not be able to cover every session. Nevertheless, I shall do the best I can.


The opening keynote is being delivered by Olivia Gambelin, and she is speaking on "Elevating the Human in the Equation: Responsible Quality Testing in the Age of AI".

Olivia describes herself as an "AI Ethicist" and she is the author of "Responsible AI". This of course brings us back to a large set of questions and quandaries. A number of people may think of AI in the scope of LLMs like ChatGPT or Claude and say, "What's the big deal? It's just like Google, only the next step." While that may be a common sentiment, it's not the full story. AI is creating a much larger load on our power infrastructure. Huge datacenters are being built out that are making tremendous demands on power, water consumption, and pollution/emissions. It's argued that the growth of AI will effectively consume more of our power grid resources than if we were to entirely convert everyone over to electric vehicles. Thus, we have questions that we need to ask that go beyond just the fact that we are interacting with data and digital representations of information.

There's a common refrain that "just because we can do something doesn't necessarily mean that we should". While that is a wonderful sentiment, we have to accept that that ship has sailed. AI is here, it is present in both trivial and non-trivial uses, with all of the footprint issues that entails. All of us will have to wrestle with what AI means to us, how we use it, and how we might be able to use it responsibly. Note, I am thus far talking about a specific aspect of environmental degradation. I'm not even getting into the ethical concerns when it comes to how we actually look at and represent data.

AI is often treated as a silver bullet, something that can help us get answers for areas and situations we've perhaps not previously considered. One of the bigger questions/challenges is how we get to that information, and who/what is influencing it. AI can be biased based on the data sets it is provided. Give it a limited amount of data and it will give a limited set of results, based on the information it has or how that information was introduced/presented. AI as it exists today is not really "Intelligent". It is excellent pattern recognition and predictive text presentation. It's also good at repurposing things that it already knows about. Do you want to keep a newsletter fresh with information you present regularly? AI can do that all day long. We can argue the value add of such an endeavor, but I can appreciate that for those who have to pump out lots of data on a regular basis, this is absolutely a game changer.

There are of course a number of areas that are significantly more sophisticated, with data that is much more pressing. Medical imaging and interpreting the details provided is something machines can crunch in a way that would take a group of humans a lot of time to do with their eyes and ears. Still, lots of issues can come to bear because of these systems. For those not familiar with the "Texas Sharpshooter Fallacy", it's basically the idea of someone shooting a lot of shots into the side of a barn over time. If we draw a circle around the largest cluster of bullets, we can infer that whoever shot those shots was a good marksman. True? Maybe not. We don't know how long it took to shoot those bullets, how many shots are outside of the circle, the ratio of bullets inside vs. outside of the circle, etc. In other words, we could be making assumptions based on how we are grouping something, with our biases and prejudices leaning on the result. Having people look at these can help us counter those biases, but it can also introduce new ones based on the people that have been asked to review the data. To borrow an old quote that I am paraphrasing because I don't remember who said it originally, "We do not see the world for what it is, we see it for who we are". AI doesn't counteract that tendency, it amplifies it, especially if we are specifically looking for answers that we want to see.
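To make the fallacy concrete, here is a quick simulation; this is my own illustration, not anything from Olivia's talk, and it's a minimal Python sketch using only the standard library. It scatters random "shots" and only afterwards hunts for the circle that holds the most of them:

```python
import random

random.seed(1)
# Scatter 200 "shots" uniformly across a 10x10 barn wall: pure chance, no skill.
shots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]

def hits(cx, cy, r=1.0):
    """Count shots inside a circle of radius r centered at (cx, cy)."""
    return sum((x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 for x, y in shots)

# AFTER the shots land, hunt for the circle that happens to hold the most of them.
count, cx, cy = max(
    (hits(gx / 2, gy / 2), gx / 2, gy / 2)
    for gx in range(21) for gy in range(21)
)
print(f"'Marksman' circle at ({cx}, {cy}) contains {count} of 200 shots")
# A random scatter averages about 6 shots in a circle this size, so the winning
# circle looks like skill only because we drew it around the densest cluster.
```

Run it with different seeds and the "marksman" never goes away; the circle is drawn after the data arrives, which is exactly the inference error the fallacy warns about.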

Olivia is arguing, convincingly, that AI has great potential but also significant liabilities. It is an exciting aspect of technology, but it is also difficult to pin down as to what it actually provides. Additionally, based on its pattern matching capabilities, AI can be wrong... a lot... but as a friend of mine is fond of saying, "The danger of AI is not that it is often wrong, it's that it is so confidently wrong". It can lull one into a false sense of authority or reality of a situation. Things can seem very plausible and sensible based on our own experiences, but the data we are getting can be based on thin air and hallucinations. If those hallucinations scratch a particular itch of ours, we are more inclined to accept the findings/predictions that match our world view. More to the point, we can put our finger on the scale, whether we mean to or not, to influence the answers we get. Responsible AI would make efforts to combat these tendencies, to help us not just get the answers that we want to have, but to challenge and refute the answers we are receiving.

From a quality perspective, we need to have a direct conversation as to what/why we would be using AI in the first place. Is AI a decent answer for looking at writing code in ways we might not be 100% familiar with? Sure. It can introduce aspects of code that we might not be super familiar with. That's a plus and it's a danger. I can question and check the quality of output for areas I know about or have solid familiarity with. I am less likely to question areas I am lacking knowledge in, or to actually look to disprove or challenge the findings.

For further thoughts and diving deeper on these ideas, I plan to check out "Responsible AI: Implement an Ethical Approach in Your Organization" (Kogan Page Publishing). Maybe y'all should too :).


Monday, August 14, 2017

Getting Ready to CAST a Wide Net in Nashville

It’s time to get back in the saddle and start doing some live blogging and somewhat real-time podcasting. Now that CAST is this week in Nashville, I’m doing my last bit of planning and logistics to get me to Nashville and get ready for CAST. For the past couple of months, Claire Moss and I have done some teaser podcasts for the conference (dubbed the AST CASTcast). If you haven’t been following along, here’s the whole series we've produced thus far (we have 18 episodes you can listen to :) ):


I will be Live Blogging this conference, but in a change from the past, I will not be doing it here on Testhead. AST has asked me to do my Live Blogging directly with the conference materials, so I encourage everyone to follow along over at the AST site.

With that in mind, I’d like to do an experiment. There are many sessions I would like to attend, but since I’m live blogging the event, I thought I would throw out to the readers of Testhead… what talks would you like to have me attend? What topics interest you? What do you think I might benefit from tuning into that may not currently be on my radar?

Also, I will be packing my microphone, so if you would like to have us record a podcast or two while we are there, let me know who you would like to interview and we can see if they would be game to participate.

It’s going to be an active week. I hope you all will follow along with me, and yes, I will also post some here as well :).

Monday, May 1, 2017

A New "Limited Time" Project: The AST CASTCast

For several years, I've been actively engaged in making and producing podcasts. There's a certain amount of commitment and resources needed to make a podcast work, and there are ways to share and communicate it that are considered fairly standard. In short, if your podcast isn't in iTunes (or more correctly now, Apple Podcasts) then it doesn't exist. That's not entirely true, but it does make podcast discovery a little more challenging.

Still, there are periods where you want to do something that's a little more focused, to get information out to a target group quickly, and that may not require having a long shelf life. I've started an initiative that does exactly that, and in the process, we made some choices that I think you all may find interesting. At the very least, they may be a cautionary tale or a model that you may wish to try yourselves.



Claire Moss and I are recording a series of short, targeted podcasts that are meant to help focus on the upcoming AST CAST Conference in Nashville, Tennessee this year. This podcast is called The AST CASTcast, and in many ways, it's like many other podcasts, and in a few ways, it's different. As a way to quickly get content out to people, rather than produce a podcast that would be pushed to podcast syndication systems, we decided to go with an approach that used YouTube. YouTube is an interesting platform choice, in that it's ubiquitous, and it's easy to access and upload to. It does have a few challenges, in that it's not an audio-only medium. Every file has to be a video file. If we were to record a show with video, then that would be easy, but we are not able to guarantee all of our guests can record with video, and rather than leave them out, we decided to make an audio podcast.


That's great, you may say, but YouTube is a video platform. How do you make an audio-only recording available as a video file? We opted to use a service called TunesToTube. The service lets you upload an audio file, and then an image file (in our case, we chose to make a show card with show info and pictures of participants). These then get combined into a video file and uploaded to YouTube. If you use the free version, you will have a watermark appear that says the TunesToTube service was used to encode the video.

The process is pretty straightforward:


  • Go to the TunesToTube.com website.
  • Log into your YouTube account.
  • Upload the audio file you wish to use for your podcast.
  • Upload the picture(s) you would like to use to represent your video (you can also type in text and make some selections so that you have a video title, etc. if you don't want to load an image).
  • Put in the title, description, and tags as you want them to appear.
  • Upload the file to YouTube.

As stated, this would be a great way to give a test drive to a podcast or to make it available as an alternative to a regularly published podcast feed. The advantages are that going from finished audio to available to listen is fast. Another plus to using YouTube is that it makes the content embeddable, as I'm doing in this post here. The source video is on YouTube, but anyone who wants to include it on a blog post or a share can do so easily, and it's immediately accessible via desktop, phone or tablet, as long as the device in question can play YouTube videos. The downside to this approach is that it does require the user to use YouTube and stream video files. Granted, they are not tremendously larger than the audio file, but there is an overhead, and thus, it deserves to be mentioned. Also, it's not as easily downloaded as a regular podcast, but there are methods to do that.
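If you would rather script the combining step yourself instead of going through the service, the same still-image-plus-audio trick can be done locally. Here's a minimal sketch, assuming Python and an ffmpeg binary on your PATH; the file names are placeholders:

```python
# Combine a static show card and a finished audio episode into a video file
# suitable for YouTube. Hypothetical file names; requires ffmpeg on the PATH.
import subprocess

subprocess.run([
    "ffmpeg",
    "-loop", "1",            # repeat the single image for the whole duration
    "-i", "show_card.png",   # the show card image
    "-i", "episode.mp3",     # the finished podcast audio
    "-c:v", "libx264",       # a video codec YouTube accepts
    "-tune", "stillimage",   # optimize encoding for a static picture
    "-c:a", "aac",           # an audio codec YouTube accepts
    "-pix_fmt", "yuv420p",   # pixel format for broad player compatibility
    "-shortest",             # stop encoding when the audio ends
    "episode.mp4",
], check=True)
```

The resulting episode.mp4 uploads to YouTube like any other video, which as a side benefit avoids the watermark that comes with the free tier of the service.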

In any event, we hope you enjoy the AST CASTcast while we are posting them in the upcoming weeks and months. As always, I appreciate feedback and comments about what we are doing :).


Wednesday, August 5, 2015

Crossing the Finish Line - Reflections from #CAST2015

It's a little after midnight right now, on Thursday, August 6, 2015. The last talk was given six hours ago. Pizza, soft drinks, and candy were brought in to complement the tester games and lightning talks. All during the evening, I shook hands with and got to meet many first-time attendees and thanked them for putting their trust in us to put on a memorable conference for them. We look to have succeeded on that front.

I made it a point this time, as President, to sit at different tables and talk with people who were first timers at the conference. I wanted to understand what brought them here. Some came because of the reputation of CAST. Some came because co-workers recommended it. Some came because it was local to Western Michigan. Some came because, at the last minute, the person who was supposed to come had to cancel and they were asked to go in their place. It was the last one I had a chance to hear about in the elevator as I headed upstairs after we finished everything. I asked her if she felt it was worth going in blind to an event she knew nothing about. She said "Yes, very much!"

This conference has been in the planning stages since this time last year. The board as a whole came out to Grand Rapids to check out the Amway Grand Plaza and scout out the surrounding neighborhood, and we felt we had a great venue to work with. The responses from the participants seem to confirm that fact. The conference committee handled logistics and contracts in as focused and timely a manner as could be hoped for. The Program committee did a wonderful job delivering a balanced program of excellent speakers, including a whole bunch of brand new speakers. We have been proud to sponsor new voices at CAST over the years, and especially this year, with the help of Speak Easy. Several speakers submitted papers to go with their talks and workshops, and those should be available soon.

I spent much of this conference in the role of a facilitator or facilitator's helper, and it helped keep me focused and alert to the questions and answers. Open Season means something quite different when you are the one managing the crowd and their expectations. Put simply, I would definitely do it again.

Each year the webCAST grows bigger and more people participate. I don't have hard numbers, but if Twitter is anything to go by, a lot of remote people liked what they saw and took to the Twitterverse to confirm it. Ben and Paul Yaroch did an excellent job once again. I can't wait to see the videos uploaded to YouTube.

We elected a new board for 2015-2016, and we released those who have chosen not to continue with the board going forward. As I said in an earlier post, I have bittersweet feelings about resuming my life as a civilian. On one hand, I feel four years is ample time. I think it's important that, for new ideas to take hold, those of us on the board don't run continually. We need to encourage other new voices to get involved and roll up their sleeves. I may be back again to run in the future, but I feel it's time to step back and let others have a chance.

Overall, I want to say thank you to each and every one of the participants that carved out time during their week to come spend time with us. The conference is now finished, we have indeed crossed the finish line, and now it's on to the next adventure. That next adventure still has a few things that need to be ironed out before we can say anything, but I have a feeling that the audience and participants will love it ;).

That's it for me, I need sleep. Goodnight, everyone!!!

Exploring at Cloud Foundry - Live at #CAST2015

Jessie Alford told me something that astounded me... he's at Pivotal, working on Cloud Foundry... hey, wait a minute, that means he's in my neck of the woods now! Mind you, that has nothing whatsoever to do with his talk, which is "Driving Adoption of Chartered Exploratory Testing In An Agile Organization", but it made me smile because I have talked with a number of people over the last couple of years about what Cloud Foundry does and how they approach testing... and now I can see it for real :).

Pivotal believes in exploratory testing, even though they don't have a dedicated test team. They have dedicated explorers who work in shifts alongside their other development responsibilities. They create a variety of charters to help describe areas that might be risky.

I am impressed that a programming team would spend the time to develop charters for exploratory testing, but Pivotal has surprised me many times over the years (at my previous job I was a daily user of Pivotal Tracker). Jessie shared his backlog with listed charters, and it's cool to see how targeted they are.

Also cool is that everyone is encouraged to write charters, and each team develops their own norms as to how they are written. Each of the questions they ask can point back to the charters for targeted testing. As Jessie explained, charters are a general purpose scaffold. When Dev teams started looking at charters, they used them for testing, but they also started using them for lots of other things, like figuring out how to interact with other components and making sure that they are working effectively.

Another key lesson was that pairing alone is not enough to communicate skills across an organization, but charters can allow teams to codify those abilities. Additionally, it's not enough to just teach exploratory skills; you also have to be able to confirm that those skills are transferred. At Pivotal, the term "test" doesn't get used very often, but "explore" is an everyday word. When you say "test" in a TDD, CI, CD environment, the word has become so diffuse that it has lost its meaning, but exploring is easy to communicate, so it gets used. Automated checks are of course used and are important, but speaking in terms of exploring makes clear the ways it is used in Pivotal, which is pretty cool.

Quick message to Natalie Bennett, Elisabeth Hendrickson and Dave Liebreich... Jessie done good, y'all should be proud of him. Jessie, if you find yourself down in the Pivotal Palo Alto office, let me know so we can get together for lunch or something :).

Fomenting Change... In Space - Live at #CAST2015

We hear about organizations dealing with change for applications and hardware in data centers or server farms in remote locations, but how about making changes to equipment that resides outside of Earth orbit... or on another planet entirely! Barbara Streiffert gets to deal with that reality as part of the Jet Propulsion Laboratory. She deals with testing of the Mars Rover, among other initiatives. Talk about a risk-averse environment!

So much of this talk was specific to information shown on video. She showed us details about the Dawn spacecraft and ion propulsion... wow, talk about a unique test experience! How cool would it be to work on systems that control ion engines! For those not familiar with what an ion thruster is, instead of heating the gas up, xenon is introduced into the chamber, and the ionization that results causes thrust (that's what I heard, I will not vouch that I got that right ;) ). Fortunately, this talk was part of webCAST, so everyone can see the videos and details.

The Dawn spacecraft flyover of Ceres is being shown, and my inner third grader is totally spazzing out right now :).

The Art and Science of Questioning - Live at #CAST2015

It's fun being a facilitator, in that you get to actively participate in the discussion, and it also gets you up and moving. Today I'll be spending most of my time facilitating the webCAST room, so if you see a bald guy in a white AST polo running around, yep, that is me :).

Jess Ingrasselino (@jess_ingrass) works at Bit.ly and was a music teacher prior to her tech career. I think having that background is probably a great asset for being involved in programming and testing. She explained how she went from being a music teacher to a quality assurance engineer, and how the processes are actually very similar. When a violinist or violist wants to start playing the Brandenburg Concerto, they don't start with the full piece. They start with the first note, or the first theme, and they grow from there. There is a focus on building knowledge and experience through performance and practice. Software testing does much the same thing; we start small with basic concepts, and we practice and apply them.

Testers ask questions, but not just any questions. They need to be targeted and specific, such that they may better understand and learn the product, and focus on other aspects of the product. I'm fond of the definition of testing that I use, which is "Ask the product a question, and based on the answer you receive, ask additional and more interesting questions."


Jess makes a case that borrowing from social science and understanding the methods used in "qualitative research" makes it possible for us to ask compelling questions. The methods Jess uses for this come from Robert Stake. One of the areas to focus on is "multiple interpretation". What happens when different people with different permissions log into the system? What is shared? What is unique to certain permissions?

One of the things that is vexing about asking questions is that we can fall into the trap of asking questions that are self-serving. We can easily craft questions that will provide answers that support our own beliefs or biases. We need to be aware of the unintentional ways that we word things or couch questions. Instead of asking "what issues are you having with development?", ask "can you describe for me your interactions with the functional teams in the organization?". Yeah, that seems like a squishy way of asking questions, but it allows the person being questioned to provide an answer free of initial or expected negativity. Instead of asking them to tell us what is wrong, we ask them what they do. In the process, they may more freely describe what is going wrong.

Another compelling answer to questions is silence. Yep. I've seen this many times, and silence means many things. It can mean "I'm thinking" or it can mean "I'm waiting to see if you will walk back that answer before I open my mouth." Gauge the silence and don't be afraid to meet the silence with silence. It can often be used as a negotiating tactic. The marks are usually people who cannot get through the quiet and will start saying something, anything, to break the silence. Often, people who negotiate find that they settle for far less than they would if they were more focused on maintaining the silence.

Testers are, or should be, fundamentally curious. Even with that, asking questions is a skill, and it's one that can be developed. I see that studying up on qualitative research is in my future :).


The Future Is Here - Live at #CAST2015

It's Wednesday, and to be honest, the events of the past several days have become a blur. I've been in Grand Rapids since Friday, and I've been moving non-stop since I've been here. Test Retreat, conference setup, facilitator meetings, elections, logistics, rooms, venue preparations... it's easy to lose track of where we are and why we are here. I joked a few days back that I and the rest of the board and conference committee were busy doing all we could "to make all your wildest conference dreams come true". I'm not sure how we've delivered on that, but from the tweets I have seen, and the comments directed to me thus far, I think we're doing pretty good on that front :).

I was excited to see that Ajay Balamurugadas was chosen to be Wednesday's keynote speaker. Ajay was one of the first software testers I had the pleasure to interact with when I chose to plug into the broader software testing community. Many testers were saying things and spouting ideas, but Ajay was rolling up his sleeves, doing stuff in real time, and sharing his results, both good and bad. Ajay introduced me to Weekend Testing, and then encouraged me to bring it to the USA. He stayed up late in his time zone to shadow me and offer suggestions for the first few sessions we did, and then he let me fly on my own. He has participated in many of our Weekend Testing sessions, including a session with flawk, a company my friend Matt Coalson has been building the past few years. Matt's literal words to me about the session were "Dude, that guy, Ajay? Wow, he's the real deal!" Ajay has put the time and the energy in to prove, time and again, that yes, he is indeed the real deal!

Ajay did something pretty bold for a keynote speaker. He put up a mind map of his talk and its details titled "Should I listen to Ajay?" In a nutshell, he said he would be covering learning opportunities, a trend in who tests, testing education, testing & other fields, standards and schools, and his own thoughts. He then invited those with more important things to do to leave, and said he would be totally OK with that. Notice I'm still typing ;). Right now, this is the most important thing I can be doing :).

Ajay starts with a quote from Aldous Huxley... "try to learn something about everything, and everything about something". In a nutshell, to borrow from Alan Page (and yes, others say it too, but Alan is famous for talking about this on the AB Testing podcast), "be a generalizing specialist as well as a specializing generalist". Be T-shaped, a jack of all trades who makes a priority of getting genuinely geeky with a few areas that you enjoy and feel are valuable. Don't just be a software tester, actually learn about software testing. Some ideas are going to be better than others, but dig in and try ideas out. Learn as much as you can, even if it's to decide what you will discard and what you will keep. Why do we so often welcome new testers with test cases? Do we not trust them to be able to test, or do we just insist on them doing what we tell them up front, with hopes that their creativity will appear later? If they are given prescriptive test cases and told to execute them, don't be surprised if that hoped-for creativity does not appear.

There are several organizations that exist to help teach software testers, some obvious and some less so. Ministry of Testing, Weekend Testing, BBST testing courses, Udemy, Coursera, Test Insane's mind map collection, the Software Testing World Cup... there are *lots* of places we can learn and try out new ideas.

Ajay said something pretty cool (attribution missed, will fill in later)... "If you would like to double your income, triple your learning!" We each need to take the opportunities we have, and we need to apply them. I personally believe that my blog exists for this purpose. Sometimes I have let several days go fallow without writing because I feel I don't have anything unique to share. However, I have had such a rush these past few days writing summaries and interpretations of each of the sessions I've been involved in since Saturday. Before August, my largest number of blog posts for any given month was nine, and sometimes I felt like I struggled to get those out. Right now, I'm writing the eighteenth blog post for August, all of them inspired by my being here in Grand Rapids and the activities I've been participating in. If all goes well, I may have four more to offer by the end of today. Seriously, that's twenty-two blog posts in five days! What's interesting is that, as I've written so many, I'm feeling energized, and I want to keep that energy going. That's the power of diving into your learning, and creating in the process. I want to see what it takes to keep it going.

Ajay asked why you need a title to be a leader. The truth is, you don't. You can lead right now, and you can be an example and a guide to others. You do not need to ask permission, you just need to act with conviction and determination. Figure out the things you can do without having to ask permission, and dig in. If a process is slow, try another one. In parallel, if you must, or totally replace the old efforts with a new approach if you can do so. People may feel frustrated if you go and do something without asking, but they will likely keep what you are doing if you deliver a better result than what they were getting before.

What do you say when someone says "I'd like to become a software tester, what do I need to know?". Do we tell them the truth, that it can be exceptionally hard, and that there is so much to learn? Do we tell them that there's a lot of things they can get involved with? Do we encourage their curiosity, and get them engaged where they are? Personally, I think we can do a lot of good by starting with where people are and showing them the fun and experience software testing can be. Granted, it's not always fun, but there's plenty of opportunities to explore and be curious. De-emphasize the mechanics of testing, encourage the curiosity. Software testing classes are developing. I'm biased, but I'm pretty fond of the BBST courses and what they offer. Still, there's a need for more, and we have an opportunity to help make that happen. It will take time, of course, but there is a need for excellent software testing training. Let's do what we can to foster and develop it.

My thanks to Ajay and his devotion to our craft. He's a role model that I believe any software tester would do well to emulate, myself included. At this point I need to get ready for Open Season and help facilitate questions and answers. Thanks for playing along with me today, I'll be back in a bit :).


Early Morning Musings - Live from #CAST2015

For those not familiar with the concept of Lean Coffee, a group gathers together (coffee optional), proposes a set of topics, organizes the topics to see if there are synergies, votes on the topics to set the order of discussion, and then goes into discussion of each item for an initial five minutes. If there is still energy for the topic after the first five minutes, we can vote to continue the discussion or stop it and move on to another topic.

Today's attendees are/were:

Perze Ababa, Carol Brands, James Fogarty, Albert Gareev, Dwayne Green, Matt Heusser, Allen Johnson, Michael Larsen, Jeff MacBane, Justin Rohrman, Carl Shaulis

Topics that made the stack:

Testing Education, What's Missing?

Thoughts thrown out by the group: Accessibility, Test Tooling, Mobile and Embedded, Emerging Technologies, Social, Local, Geographically Tied Applications, Testing Strategy

Of all of these ideas, testing strategy seemed to get the most traction. Everyone seems to think they know what it is, but it's a struggle to articulate it. Regulatory compliance could be a relevant area. What about shadowing a company (or several) to see what they are doing with their new testers, especially those who are just starting out? What are their needs? What do they want to learn? What do those companies want them to learn? Consider it an anthropology experiment. The action item was to encourage the attendees to see which companies would be game to be part of the study (anonymized).

How Can We Grow Software Testers Through our Local Meetup Groups?

Getting topics of interest is always a challenge. How do we focus on areas that are interesting and relevant without being too much the same as what they deal with at work? Albert has been hosting a Resume Club at his Toronto Testers Meetup, and that's been a successful focus. Gaps in experience can help drive those discussions. The Lean Coffee format itself can be used in meetups, and the topics discussed can help develop new topics that appear to be of interest to the community. Encourage games and social interaction; we don't necessarily have to focus on talks and discussion. The AST Grant program can also be used for this. We offer grant funds for meetup support, but we also have the option of flying in people to meetups to help facilitate events as well (Michael did this in Calgary back in 2012 for the POST peer conference). If a monthly meetup is too difficult, commit to quarterly and work to recruit speakers in the in-between time. If the critical mass gets large enough, schedule more meetings. Have a round robin discussion from a talk that's already been presented and recorded. Make workshops based on topics of interest.

Writing Code For Testers Via Web Based Apps

Matt discussed this around a web app that aims to help teach people to program (as part of a book to teach people how to code in Java). In a way, this is a two part problem. On one hand, there's the interface to engage and inform the user, to get them involved and learning how to code. The second is the meta-elements that can determine if the user has completed the objective and can suggest what to do next. Both require testing, but both have different emphases (the ability for an application to determine if "code is correct" can be challenging, and there is always TIMTOWTDI, "There Is More Than One Way To Do It", to consider).

Good discussions, good ideas to work with, and so many more possible things we didn't even get to consider. A lesson learned for me is that it might be cool to run an anthropology experiment with other companies to see what they want to have their testers learn.

Time for breakfast, see you all in a bit :).

Tuesday, August 4, 2015

Leaping into Context - Live from #CAST2015

Erik Brickarp gets the nod for my last session today. After facilitating three talks, it feels nice to just sit and listen :).

Erik's talk focuses on "A leap towards Context-Driven Testing". When chaos and discord start raining down on our efforts, sometimes the best breakthrough comes with a break from what came before. In 2012, he joined a new team at a big multinational telecom company. That team had a big, clunky, old school system for both documentation and loads of test cases (probably based on ISO-9001, and oh do I remember those days :p ). What's worse, the team was expected to keep using these approaches. To Erik's credit, he decided to see if he could find a way out of that arrangement.

The team decided they needed to look at the product differently. Rather than just focus on features and functions, they also decided to look at ways that the project could be tested. In the process of trying to consider what the test approach had to be, they moved from multiple spreadsheets to web pages that allowed collaboration. By using colors in tables (as they previously did in spreadsheet cells), they were able to quickly communicate information by color and by comment (reminds me of Dhanesekhar's tutorial yesterday ;) ).

By stepping away from iron-clad rules and instead focusing on guidelines, they were able to make their testing process work more efficiently. Of course, such changes and modifications invite criticism. The criticism was not based on the actual work; rather, people were upset that a junior team member had gone behind the back of the organization to "change the rules". Fortunately, because the work was solid and the information being provided was effective, informative, and actionable, they were allowed to continue. In the following weeks, they managed to make the test team's deliverables slimmer and more meaningful, faster to create and easier to maintain. By using a wiki, they made the information searchable, the reports listable, and everything easy to find.

Erik admits that the approach he used was unprofessional, but he was fortunate that the effort was effective. As a lesson learned, he said he could have approached this with better communication and made these changes without going behind anyone's back. Nevertheless, they did it the way they did, and so they have a much more fun story to tell. The takeaway here is that there are a lot of things we can do to improve our test process that don't specifically require corporate sanction. It also shows that we can indeed make changes that could be dramatic without introducing a ton of risk. Support is important, and making sure the team supports your efforts can help testers (or any team) make transitions, whether they be dramatic or somewhat less so.

Additionally, if you hope to change from one paradigm to another, it helps a great deal to understand what you are changing to and how you communicate those changes. Also, make sure you keep track of what you are doing. Keeping track doesn't mean having to adopt a heavy system, but you do have to keep track. Exploratory testing doesn't mean "random and do anything". It means finding new things, purposefully looking for new areas, and making a map of what you find. When in doubt, take notes. After all that, make sure to take some time to reflect. Think about what is most important, what is less important, and what you should be doing next. Changing the world is important, and if you feel the need to do so, you might want to take a page from Erik's book. I'll leave it to you to decide if it makes sense to do it in full stealth mode or with your company's approval. The latter is more professional, but the former might be a lot more fun ;).

Get Your Gandalf On For #a11y Testing - Live at #CAST2015

We spent the first part of the session framing the problem and encouraging the attendees to discuss the issues; now we are unleashing the hounds and letting them hear what a site sounds like through a screen reader. To be brave, we submitted the AST web site to the #a11y treatment. Needless to say, we have some work to do to behave better with screen readers, but it made for a great way to discuss the challenges.

We have two angles we use to approach the accessibility question. The first is to frame questions we can ask and principles we can discuss at the initial design phase. The ten principles I like to use come from Jeremy Sydik's book "Design Accessible Web Sites: Thirty-six Keys to Creating Content for All Audiences and Platforms" (Pragmatic Publishing, 2007):

  1. Avoid making assumptions about the physical, mental, and sensory abilities of your users whenever possible.
  2. Your users’ technologies are capable of sending and receiving text. That’s about all you’ll ever be able to assume.
  3. Users’ time and technology belong to them, not to us. You should never take control of either without a really good reason.
  4. Provide good text alternatives for any non-text content.
  5. Use widely available technologies to reach your audience.
  6. Use clear language to communicate your message.
  7. Make your sites usable, searchable, and navigable.
  8. Design your content for semantic meaning and maintain separation between content and presentation.
  9. Progressively enhance your basic content by adding extra features. Allow it to degrade gracefully for users who can’t or don’t wish to use them.
  10. As you encounter new web technologies, apply these same principles when making them accessible.

From there, when we consider the design issues we want to discuss, if we want to be ready to discuss the requirements and do early testing on initial designs, we can use a heuristic called HUMBLE:


Humanize: be empathetic, understand the emotional components.
Unlearn: step away from your default [device-specific] habits. Be able to switch into different habit modes.
Model: use personas that help you see, hear and feel the issues. Consider behaviors, pace, mental state and system state.
Build: knowledge, testing heuristics, core testing skills, testing infrastructure, credibility.
Learn: what are the barriers? How do users Perceive, Understand and Operate?
Experiment: put yourself into literal situations. Collaborate with designers and programmers, provide feedback.


Both the design principles and the HUMBLE heuristic are useful early in the process of discussing Inclusive Design and overall Accessibility coding. What can we do when we are testing an existing feature, and we are evaluating it for Accessibility? We are working on a feature that is Dev Complete and has been delivered for testing. Do we have some suggestions for that situation? As a matter of fact, yes :).

Albert encourages testers to embrace "PaSaRaN". The word "pasaran" in Spanish means "they shall pass". If you say "no pasaran" you are effectively saying "you shall not pass"... and now my cheesy title should make a little sense ;).

PaSaRaN is a method of testing if a Page allows for Accessible Scanning, Accessible Reading, and Accessible Navigation.

It's entirely possible that you can go through your testing career and never get involved in an Accessibility project, but it's a very good bet that issues surrounding Accessibility will affect you at some point in your life, whether it be in a secondary or primary aspect. If you would like to evaluate products for Accessibility, PaSaRaN is a clever short-hand. If you find yourself getting into site design or web development, I'd like to encourage you to consider the ten principles and be HUMBLE in your designing and coding. Also, if you would like to talk about any of the stuff that we covered directly in the workshop, please feel free to contact either Albert or me and we'll be happy to answer any questions.
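PaSaRaN as Albert presents it is a human evaluation, but part of the Scanning side can be triaged with a script before anyone straps on a screen reader. Purely as an illustration of the idea (my addition, not part of Albert's method), here is a sketch assuming Python with the requests and beautifulsoup4 packages, and a placeholder URL:

```python
import requests
from bs4 import BeautifulSoup

def triage(url):
    """Flag two common screen reader obstacles on a page."""
    soup = BeautifulSoup(requests.get(url).text, "html.parser")

    # Scanning: an image with no alt attribute gives a screen reader nothing
    # to announce (an empty alt="" is fine for purely decorative images).
    missing_alt = [img.get("src") for img in soup.find_all("img")
                   if img.get("alt") is None]
    print(f"{len(missing_alt)} image(s) missing alt text")

    # Navigation: heading levels that jump (h1 straight to h3) break the
    # outline that screen reader users scan and move through.
    levels = [int(h.name[1])
              for h in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:
            print(f"Heading level jumps from h{prev} to h{cur}")

triage("https://example.com/")  # hypothetical target; point it at your own site
```

A script like this only catches the mechanical misses; the listening exercise described above is what shows you how the page actually feels to use.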

Finding Your #a11y in Accessibility - Live from #CAST2015

Albert Gareev and I have been working on the material for this presentation for a long time, so I am excited to be facilitating this workshop session. Albert is making his debut as a speaker at CAST, and more important, this conference marks the first time we have met each other in person. We've collaborated online for five years, so to finally get to work together on this material as it relates to accessibility means a lot to both of us. The fact that we have a group of people interested in participating with us is icing on the cake :).

Accessibility is not just for people with "special needs". Accessibility comes into play for everyone at some point. In the presentation, we presented an example of a person with several "physical impairments" and asked the group to tell us what this person looked like. We had some interesting discussions, and then we revealed the trap: a picture of a young lady looking at her cell phone while walking across the street. After a few chuckles and an explanation, we introduced the idea of "secondary disability", the fact that there are certain environments where a person has a special need, even though in their regular environment they might not. Over time, if we are lucky enough to live long enough, every one of us will deal with a special need that falls into the Accessibility realm. If we have impressed on the group that Accessibility is more than just for people with specific disabilities, I will consider that a huge success.

Albert and I both agree that putting a heuristic out there for people to use, while helpful, is much more potent when it is seen in action. A few months ago, I had the chance to give an accessibility talk at STP-CON in San Diego, and as part of my presentation, I asked the participants to fire up a screen reader on their systems and navigate to one of their favorite pages. As the screen reader started to do its job, the reactions were both amusing and very telling. They "saw" how difficult it was for non-sighted users to "listen" to the information on the screen. They were drowned by the rapid-fire speech that was trying to articulate what was on the page. This was one example and one condition. There are many other areas that accessibility covers as well (non-sighted, limited sight, no hearing, limited hearing, limited movement, limited cognitive ability, etc.).

Accessibility is a difficult requirement to test for, in the sense that context has to be taken into consideration. Most of the time, we have to pick and choose which Accessible features we will use and whom we will address. Is it enough to make a screen reader work with the product? Do we need to also add closed captioning for the hearing-impaired? Do we have a way to make shortcuts for limited movement? Are we looking to address all of them at once, or are we willing to take them on one at a time?

We're taking a break now, so this seems like a good time to push this out. I'll come back with part two of our workshop in a bit.

Can I Lean on My Context at My Startup? - Live at #CAST2015

Eric Ries wrote an interesting book called "The Lean Startup" back in 2011, and it's become quite the buzzword du jour. Thomas Vaniotis has decided that he wants to get beyond the buzzwords, actually discuss Lean Startup, and make the case that Lean Startup is all about testing.

Thomas recently made the shift from Technical Tester to Product Manager, working with a "Lean Startup", and he states categorically that he is doing as much testing as ever, if not more. Thomas uses Eric Ries' definition of a startup, which is "a human institution designed to create a product or service within an environment with extreme uncertainty". "Lean" is the idea that we want to focus on the essential value and remove the waste from the system wherever possible. Put the two concepts together, and you get "Lean Startup". This means that organizations that embrace this are going to look rather different from one another. It also means that what is considered value and what is considered waste will be different in each organization. Testers, this means that context matters a LOT!

Waste can vary. Overproduction is a real issue in a literal product factory, but it's also visible in software as features that are not used or have no value. Code needs to be maintained, tests need to be run, refactoring needs to take these empty features into account, and other features that may be more important are not made because the time is being spent working on stuff that is not relevant. Idle machines are a reality in factories, and backlogs in moving features forward are every bit as bad. I see this when we have a number of stories in DevComplete but no testers with open cycles to work on them. What happens? They sit there until they can be picked up and addressed. Over time, this lag can be significant; getting a feature from PM proposal to shipped can take days, but sometimes it can take months or even years. In all, waste takes some time and talent to identify, and it can be a real struggle to eradicate once it builds up.

There's waste in programming, there's waste in design, there's waste in testing, and there's waste in release. It just happens. The goal is not to eliminate it entirely, but it is important to look at where waste occurs and see what can be minimized. Testing is part of the waste production and waste prevention culture. It's a solid part of what we do as testers, not just to find problems but to also find inefficiencies that we can work on. How do we do that? We can do that with Validated Learning, which means we subject our assumptions to scrutiny, which in turn opens us up to failure. It's possible our investigation may disprove our hypothesis. Our experiment may show that we are wrong in our assumptions, but learning that also helps us expose and remove waste, even if the waste we remove is faulty assumptions of our own.

One of the ways we can help drive the learning is with a "Minimum Viable Product". This is an ideal method for applying what is called the "Build-Measure-Learn" loop. By building the smallest possible product, we can learn if our ideas are sound, if our product meets the need, and if the data we receive supports the notion that we are on the right track. Testers do this all the time, even with something that's not an MVP. New ideas are spun out from each cycle of this process. If this looks a lot like exploratory testing, indeed, it is :).

Thomas used a number of interesting examples (Zappos, Dropbox, etc.) and how they made their minimum viable product. For Dropbox, it was a video explaining why they had a solution for something most people hadn't figured out was even a problem. Additionally, Dropbox did it in a way that everyone could understand... it's a folder. That's it! For most people, that's all they need to know or deal with, and it's proven to be wildly successful. Food on the Table designed their system around, initially, a single customer. By working specifically with that one customer, they discovered what they needed to do to refine and create the system that would ultimately develop, and with each new family they added, they refined it even more, including automating a lot of the processes.

When we make MVPs, we need to measure the effectiveness of our efforts. In short, we need actionable metrics. What is the data telling us about our product, and are we accurately measuring something that is relevant? Does our vanity influence this choice of metrics? It certainly can. Case in point, I love seeing the hits on my blog each day. Yes, I pay attention. The problem is, hits alone tell me very little. A hit may mean people came to see my page, through some means, but does it mean they read the whole post? Does it mean they liked what they read? Does it mean they shared the link with someone else? Hit count won't tell me any of that, but it sure sounds good to say "hey, I received thousands of hits when I posted this". That's a vanity metric. It feels good, but it doesn't really tell me much, and it certainly doesn't guide me to action.

Once you have data that tells you that something isn't working, you need to change. The term for this is "pivot", and pivoting is much harder with large and bulky products. MVPs are easier to pivot with. Additionally, pivoting isn't just doing something different; it's helping people see a pain point that they don't know about and that you are ready to address. In this case, testing can help the organization not just confirm quality, but also see avenues to pivot into as well.

Lean Startups are often associated with Continuous Delivery. The reasoning is that, if we can deploy more frequently, releases themselves become much smaller and more manageable. When an organization gets to pushing multiple times in a day, releases can literally be a single story or a single bug fix. The turnaround time, instead of being counted in days or weeks, could be counted in hours, or even minutes. This approach doesn't minimize testing, it exposes it as even more relevant and necessary.

While MVPs are a starting point, the fact is, the product needs to mature and quality needs to continually improve. By using the Lean approach, that process becomes easier to manage, because the experiments are smaller and getting a broken build fixed becomes easier with smaller incremental changes. Regardless, some experiments fail, and the line needs to be stopped at times. Be ready for that.

Time to put on my facilitator hat. Thanks, Thomas, for a great talk, and now let's see what the audience has to say :).

Let's Move it Forward - Live from #CAST2015

Today the rubber meets the road. Day one of the full CAST 2015 conference is underway. We have had breakfast, we have introduced the program, and we have announced that the election is running. To that end, I want to remind all AST members that you have until 7:00 p.m. Eastern time TODAY to cast your vote for next year's Board of Directors.

Last year, Karen Johnson and I had a discussion at CAST in New York where we commented on the fact that there was an "echo chamber" developing in the software testing world, and that the voices we most needed to hear from were the ones we were not hearing. She and I discussed the idea that the industry seems to value "rock stars", to which I laughed that those who use that term haven't known very many rock stars in real life (I have, and truth be told, they are not necessarily the most reliable people on the planet, but they are often fun to be around and listen to ;) ). Karen has been a solid voice in the testing world, and I was excited to see that she was the opening keynote for CAST 2015.

One of the great things about going to conferences for the first time is that the reaction we most often have is "Oh, wow, I'm not alone!" Getting that confirmation the first time can be huge, and it helps make it possible to frame our place in the world of software testing. Karen has been in the software testing world for 30 years, and like many of us, didn't have any intention of being a tester when she started out. She planned to be a journalist (which I think is really cool, because I consider software testing and journalism to be very kindred careers). Karen shared a lot of highlights from her career, and seeing them flash across the screen made clear that she has had, and continues to have, a remarkable career! I recommend checking out the webCAST video of her keynote when it posts.

The theme of CAST 2015 is "Moving Testing Forward", and that indicates that, in many ways, testing is seen as not moving forward. Software development has changed radically these past twenty-five years (that's my time frame, since I started really thinking about it when I started working in IT in 1991). Many of the development techniques have changed, but the way that software is tested, at least in a number of organizations I have been in, has changed very little. It's easy for testers to feel "stuck" at various points, and when we try to make those forward steps, we often receive pushback, and at times that pushback comes from our own colleagues. I had a similar experience in 2009, after nearly twenty years of software testing. I felt like I was doing the same things the same ways, and there was very little I felt I had to show for it beyond what I learned the first few years. Yes, I had twenty years' experience, but it felt like I had two years' experience repeated ten times.

Stepping forward takes courage; it takes a willingness to know who you are and what you are good at. It also means you have to be ready to accept that there are things you are not good at, and often, that's the hardest part. However, it's important to realize that the things you are not good at can be improved, and the things you are good at can be boosted even more by focusing a bit on what you don't feel you are good at. Karen and I look to be on the same page here: while we realize that there are so many things in the world we will never be amazing at, we can always improve our odds by working on the things we are good at. We can't do that exclusively, and yes, some things that are distasteful or uncomfortable come with the territory. Deal with them, but don't obsess over them.

Another valuable point comes in with who we work for. Karen strongly recommends doing all you can to avoid working for people you do not respect. If you work for someone you do not respect, your entire relationship will be off kilter. You will know it, and they will know it. When you don't respect who you work for, your best work rarely comes out. When you respect who you work for, it's not uncommon to walk through fire for those people. I've had a few of those experiences, most recently with my dearly departed Director of Quality Assurance, Ken Pier. I can truly say I would walk through fire for him, and I strive today to be worthy of the respect he had for me as well.

There will be office politics. Do not believe you can escape them. You can't. They are part of the culture, and to borrow a recent quote from @DocOnDev... "your office does not have a culture, your office IS a culture". Cultures are dynamic, they are lived, and they are managed, for good or ill, and every one of us is part of that reality whether we like it or not. We cannot choose not to deal with people unless we literally work for ourselves only. I don't have that reality, and I'm guessing you don't either ;).

Karen mentioned that there is value in having a manager/boss that you work with instead of for. If you can develop a relationship that is closer to that of a peer, you can make amazing strides. True, you do work "for" someone in the literal sense that they sign your reviews and approve your bonuses and pay raises, but outside of that, it is much easier and more enjoyable to work with people rather than under people. As I said before, one of the great experiences of my career was working with Ken Pier, because he emphasized the working with. He was my director, but he hated being a manager. He wanted to be a doer, and when it came to the work of our team, he shouldered as much work as the rest of us, and often more. He wasn't an office manager or a bureaucrat; he was in the trenches with us, every day, and that made working with him both easy and enjoyable.

Along with managers, we have co-workers: other testers, programmers, project managers, along with a myriad of other people. An important question to ask is "would I want to work with this person again at another company? If I had to change jobs and companies tomorrow, who would I want to bring with me? Who would I want to leave behind?" For those people you identify as the ones you want to take with you, cultivate relationships with them, not in the sleazy "networking" way, but really get to know them and foster a relationship with them. Let them do the same with you.

Many people think that moving testing forward is about technical prowess only, but in truth it also requires people living and experiencing life. It might seem strange to think of work/life balance as a way to move testing forward, but it is important to keep moving, learning, and evaluating to keep from becoming stagnant. The fact is, our living and interacting is what lets us actually excel. There's a story I remember hearing about juggling several balls, where all of the balls are rubber except for one, and that ball is made of glass. What do you do? The point of the story is that the glass ball must never be dropped. The other part of the story is that the glass ball is never the same thing. At a given time, the glass ball may be family. It may be work. It may be health. It may be leisure. The point is, everything will be bounced and dropped at various times, but we need to be alert and aware of when the glass ball's label has changed, and what it has changed to.

Outside of work, there are many opportunities to learn and interact. Conferences are an obvious one, but there are many other ways to get involved in the community at large. Meetups, message boards, weekend testing, organization involvement, even participating in conversations on Twitter all help to foster that sense of community, but for it to matter, we need to engage. I have often said there are many who are consumers, but few who are active producers. It takes some courage to become an active producer, but the great thing is, we all can, and we can all start right where we are and move forward from there.

OK, I'm going to go help handle open season at this point, so I'll be back with you in another post in a bit. Thanks for following along :).

Monday, August 3, 2015

Why CAST? - Reflection from #CAST2015

The first Conference of the Association for Software Testing (CAST) was held in 2006. This year, we are holding the tenth CAST in Grand Rapids, Michigan. Three more years and we can claim to have raised a teenager :).

I discovered AST in 2010. By the time I had joined and learned what CAST was, I was unable to arrange to attend that year. I did, however, commit to attending CAST in Seattle in 2011. Part of that was made possible because James Bach specifically invited me to attend, to demonstrate a real world Weekend Testing event as a workshop. Additionally, I took the opportunity to offer a short talk as part of the "Emerging Topics" track, titled "Beyond Being Prepared: What Can Boy Scouts Teach Testers?".

What I found most interesting about CAST, as compared to other conferences, was what appeared to me to be the complete lack of commercial involvement. I was tired of conferences and webinars where sessions were mostly about "here, buy this tool, and all your problems will be solved". Instead, I was treated to real world situations, with speakers who are actual day-to-day, in-the-trenches software testers. The material was memorable, but more to the point, it was actionable. I could actually use what I learned. Since that first experience, I have participated in every CAST to date (Seattle in 2011, San Jose in 2012, Madison in 2013, New York in 2014, and now Grand Rapids in 2015).

Additionally, I appreciate the emphasis on having new speakers take part in CAST. Last year, I had the pleasure of presenting a talk with a brand new speaker, Harrison Lovell, about "Coyote Teaching", which was about mentorship. This year, I had the chance to see many new speakers get selected, and I am pleased to say that AST, working with Speak Easy, helped many new speakers prepare and present at CAST. It's this willingness and openness to new voices that, I believe, sets CAST apart from other conferences.

As the President of AST, I understand the effort it takes to put on a conference, encourage people to attend, recruit speakers to present, and ultimately produce a program that is second to none. Today was our tutorial day, and from the conversations I've had so far, I firmly believe we are well on our way to making that a reality for this tenth CAST. For all of you who took the time to travel and carve out your schedule to be here, whether to participate in the audience or to deliver messages as speakers, workshop presenters, facilitators, or volunteers, you have my gratitude. For those who were not able to attend in person, remember, you can still join us by watching the webCAST.

Here's to the next few days, I think they will be marvelous :)!

Long Fun Cup, I Fill You Up, Let's Have a Party - Live from #CAST2015

Three blocks down, one to go :).

This has been a productive and fun day, and I want to say thanks to Dhanasekar Subramanian for putting together an entertaining and informative session.

As we left the third block to fill up on Diet Mountain Dew and cookies (well, that's what I did, I really can't speak for the rest of the participants) we were looking at utilizing a mind map to sketch out the application and look at testing artifacts that we find. That's pretty cool in and of itself, but what about the next project? What could we do to consider and focus on a totally different app?

Truth is, we don't want to re-invent the wheel, but there are a number of key areas that we can ask "what if?" questions about. Instead of making a list of specific questions to make lots of specific mind maps, it can be helpful to have some common "rules of thumb" to draw upon. If you are reading that and want to yell "your honor, Testhead is leading the witness", well, yes, I am. For a lot of you, this is going to seem like a blinding flash of the obvious, but for those who are not familiar with the term, this is where heuristics come into play. Heuristics are wonderfully suited for mind maps. Sekar, in fact, has written about, and uses in his tutorial, a good heuristic for mobile app test coverage.

LONG FUN CUP

Below are the quick and dirty descriptions that Sekar uses to describe these terms. The "sins" are straight from his blog, and they get the point across, methinks ;):

Location: It’s a sin to test mobile app sitting at your desk, get out!
Orientation: It’s a sin to test mobile app sitting at your desk, lie in the couch.
Network: It’s a sin to test mobile app sitting at your desk, switch networks.
Gestures: In the mobile world, app responds to gestures, not clicks.

Function: Does the application fulfill core requirements?
User scenarios: How easy or how hard is it to complete a task using the app?
Notifications: How does the app let us know something needs our attention?

Communication: How does the app behave after interruptions by an incoming call or an SMS?
Updates: How does your device handle updating apps? What happens when we do?
Platform: Why does Apple and/or Android do certain things in a certain way?

What I like about taking a heuristic and turning it into a mind map is that you can communicate a particular testing strategy up front and very quickly. LONG FUN CUP contains a lot of potential testing horsepower if it is thoughtfully applied. Using labels, tags, and other icons can also add to the ability to communicate information quickly. In this case, a display of a mind map with the icons for each area can be a quick information radiator. The areas without icons can be seen as areas that still need to be addressed. Areas with progress icons can show how much is done. Green check boxes can show that areas pass, or at least are not seen to be having issues at this time. Red X marks or exclamation alerts can point to potential problems, and text boxes can be filled in with more details or pointers to other documents that provide greater depth. What's more, with the right tools, these updates could be done on the mobile devices themselves, making for a nice virtuous cycle.
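As a rough illustration of that radiator idea, here is how the LONG FUN CUP areas might be tracked in Python; the statuses below are invented for the example, and a real mind map would carry them as icons rather than strings:

    # Each LONG FUN CUP area gets a status, the way a mind map node
    # would get an icon. All statuses here are hypothetical.
    coverage = {
        "Location": "pass",
        "Orientation": "pass",
        "Network": "issue",        # would show as a red X on the map
        "Gestures": "in_progress",
        "Function": "pass",
        "User scenarios": "in_progress",
        "Notifications": None,     # no icon yet: still needs attention
        "Communication": None,
        "Updates": "pass",
        "Platform": None,
    }

    # The information radiator view: what needs attention at a glance?
    untouched = [area for area, status in coverage.items() if status is None]
    problems = [area for area, status in coverage.items() if status == "issue"]

    print("Not yet addressed:", ", ".join(untouched))
    print("Needs a closer look:", ", ".join(problems))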

It's the Map of the Game - Live from #CAST2015

One of the interesting approaches we experimented with was a game called "The Room" which has a slightly Silent Hill puzzle vibe to it, sans the homicidal faceless nurses, of course ;). We spent the better part of a half hour exploring the game and learning about the details we can play with in its tutorial setting.

What was the point to this? Well, other than getting us all to play a game for thirty minutes, we had a chance to consider how we interact with mobile devices and the methods of interaction. I remember when the iPhone was released in 2007, and while I didn't jump on and buy one, I was familiar with them from testing, and it was a decidedly different way to interact with a device. Think of all the ways that you interact with an iPhone or an Android device (and sure, a Windows Phone, too, though I've never actually used one). The touch and multi-gesture format, the loading of apps, and the mobile-first model are making some interesting changes to how we consume information and the way that sites work. Sekar made the point that a number of businesses in India are doing everything through mobile... they are literally shutting down and turning off their traditional desktop web sites!

Interacting with mobile devices brings some fundamental challenges that working on a desktop or laptop does not. Beyond touch alone, there are issues of screen space and how characters are displayed, as well as the input options and how the user interacts with the product. Unlike a traditional web site, where data entry happens at a QWERTY keyboard, mobile apps are meant to be optimized for minimal typing and movement. Many updates and data items are created by using the accelerometer, or via geolocation, or through other methods. In other words, orientation and motion affect how an app works (in some games, orientation and motion are specific actions required for the game to be played).

One thing I have learned from testing is that battery state has a lot to do with the way a system responds; apps often behave better on phones with a full charge than on those running low. Screen orientation can greatly affect what the app displays (horizontal vs. vertical): what do we see when we switch from horizontal to vertical? Data plans can vary wildly, and the amount of data we can use also varies. To let us leverage WiFi hotspots, phones will jump onto known networks whenever we are in the vicinity, provided we have met the requirements for authentication. How does our app respond when it jumps from the cellular data network to WiFi and back?
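To sketch how those "what if?" questions might become repeatable checks, here is a hedged example using pytest; the conditions and the check_app_behavior() helper are hypothetical stand-ins for however you actually drive and observe the app:

    import pytest

    # Hypothetical combinations of network, orientation, and battery level.
    CONDITIONS = [
        ("wifi", "portrait", 90),
        ("wifi", "landscape", 90),
        ("cellular", "portrait", 15),   # low battery on cellular data
        ("cellular", "landscape", 15),
    ]

    def check_app_behavior(network, orientation, battery):
        """Hypothetical stand-in: exercise the app under one condition."""
        return True  # a real implementation would observe the app here

    @pytest.mark.parametrize("network,orientation,battery", CONDITIONS)
    def test_app_under_condition(network, orientation, battery):
        # Each run asks one "what if?": what if the battery is low and
        # the device hops from WiFi to cellular mid-task?
        assert check_app_behavior(network, orientation, battery)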

This is the point where mind maps and mobile testing can come together. Mind maps are accessible and modifiable via mobile devices, more so than large, text-heavy documents are. By keeping a mobile device at hand while we test, we can capture interesting observations and update a mind map more quickly than we could by locating a desktop or laptop and making the updates there.

Sekar provided us with a detailed mind map that shows some of these considerations, so we're going to go back and reconsider some of the tests we performed on Hop! with this in mind, and utilize the map to help guide our efforts... indeed, we are much more focused and targeted with our testing and exploration this time around (I guess it really does help navigation when you have a good quality map :) ).

A CAST of 1,000 - Live from #CAST2015

I'm smiling a little bit today as I have realized that this is a special post for me. This particular post is number 1,000 on Testhead. When I started this blog, I don't think I ever imagined I would get to 1,000 posts, but I am still here, and you are still reading, and both of those realities make me very happy. Forgive the title, but it's just a quick celebratory dance, and on with the show.

We've been focusing on mind maps during the morning in Dhanasekar Subramanian's tutorial on "Mobile App Coverage Using Mind Maps". The first part of the morning was spent discussing how to create mind maps and the properties that make a mind map effective. During this part of the morning, we are looking at some ways that a mind map can be used to develop a testing strategy and to actually track coverage and progress. In addition to creating topics and sub-topics, we can create markers that have semantic meaning, and we can also create nodes with markers that show pass or fail, or that show level of completion:


A slightly silly example, but this shows how you can layer images on a node to convey information.

This is an interesting avenue for communicating exploratory test ideas. As I was playing with the admittedly simplistic music mind map, I realized that this could be a way to communicate research coverage for a topic, or to make quick notes for application test coverage, without having to read through a bunch of minute details, especially if the testing is mostly clean with little in the way of comments. If we have sub-sections that can be marked as underway, completed, or having issues, then I can drill down to see the issues and defocus from the rest of the stuff that is clean.

As an exercise, we were asked to look at a mobile app from Air France called "Hop!" One of the first areas we looked at was taking the app and splitting it into the major functions on the screen (Book a Flight, Check an Itinerary, etc.). As we were looking at the workflows available to us, the natural reaction was to go through and break down the steps of each workflow into their own nodes, and give each option a node of its own. What we realized by doing this was that we made a lot of branches and separate paths for individual tests... our mind map quickly became cluttered with step details, and we had only looked at one button. If we were to do this for every option, we would have a very detailed and very messy mind map to try to look at. Instead, we considered displaying each workflow as a single node, at least to start with. The benefit is that this allows us to give the app a broad view first, and to create the important nodes that represent the essential interactions. There will certainly be branches we can drop into, but rather than chasing down the minutiae of steps, we can word the nodes so that they are unique workflows without the need for atomic specificity.
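As a quick sketch of that contrast (the node names here are my own approximations, not the actual map from the tutorial), here are the two shapes in Python:

    # Too granular: every step becomes its own branch, and the map is
    # cluttered after modeling a single button.
    cluttered = {
        "Book a Flight": {
            "Tap Book": {
                "Choose origin": {
                    "Choose destination": {"Pick date": {"Pick fare": {}}},
                },
            },
        },
    }

    # Broader view: each workflow is a single node to start with; we
    # only drop into a branch when exploration warrants it.
    broad = {
        "Book a Flight": {},
        "Check an Itinerary": {},
        "Check In": {},
        "Flight Status": {},
    }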

One of the things I am realizing with mind maps is that adding a single node can change the purpose of the map. By adding a "heuristics" node, we can turn a mind map that is a static model of an application into a test strategy, and do so with very little work. Even with these discoveries and neat uses, the biggest challenge with mind maps, unless they are very basic, is that communicating the meaning of the map will take some conversation. I've had mind maps sent to me as notes from a talk or a presentation that were very clear and understandable, and I've received mind maps that were undecipherable, at least to me. However, when I started talking to the person who made the map, and got to understand their individual context or the way they were communicating the information, the system became clear, and I understood the reasoning behind what they were recording.
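Continuing the sketch above, grafting a "heuristics" node onto each workflow is all it takes to nudge the static model toward a test strategy; the pairings below are invented for illustration:

    # Attaching heuristic prompts to each workflow node turns the map
    # from "what the app is" into "how we intend to test it".
    strategy = {
        "Book a Flight": {"heuristics": ["Network", "Gestures", "Updates"]},
        "Check an Itinerary": {"heuristics": ["Notifications", "Communication"]},
        "Check In": {"heuristics": ["Location", "User scenarios"]},
    }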

It's time to take a break for lunch, so I'll chat with you all again in an hour or so. I hope you are finding this as fun as I am :).

CASTing my Line into Mind Maps - Live from #CAST2015

Today the rubber meets the road. We are going live today with CAST 2015. Breakfast has happened, the setup for all of the bags and badges has happened, the tables are manned, the food is being served, and the prep work for the webCAST is happening now (which I do not have any real involvement with, other than being grateful that it is happening and that Ben, Dee, and Paul are making it happen).

Monday at CAST is tutorial day, and since the participants pay extra for the content, I have traditionally not done a play-by-play of these sessions, but I will talk a bit about why I chose to be in the one I am in and what my role is in being here. I'll also share a few of my own observations around the topic without repeating Sekar's presentation. Also, to be kind to my readers, I will split my coverage up into separate posts, since many have said it's hard to follow one big long post throughout the day.

First off, each of us on the board who is attending a tutorial is doing so as the room helper or facilitator for that tutorial. I asked everyone on the board who wanted to participate to work the back channels for the event. Since there was a limited number of seats for each tutorial, we wanted to make sure that the conference participants got the first chance to be there. We also made sure that each of us picked a different tutorial, so that we would be able to evaluate the individual sessions and report back on their effectiveness, along with things we could learn to help next year's organizers choose and develop solid sessions.

There are four tutorials being offered this year:

Christin Wiedemann is leading a tutorial called "Follow your Nose Testing - Questioning Rules and Overturning Convention"

Dhanasekar Subramanian is leading a tutorial called "Mobile App Coverage Using Mind Maps"

Robert Sabourin is leading a tutorial called "Testing Fundamentals for Experienced Testers"

Fiona Charles is leading a tutorial called "Speaking Truth to Power: Delivering Difficult Messages"

Since we all opted to spread ourselves around the tutorial choices to be the room helpers and facilitators, I chose to work with Sekar as part of the "Mobile App Coverage Using Mind Maps" tutorial. One of the reasons I chose this tutorial is that I have seen a variety of mind maps used by people over the years, and my own use of them is fairly simplistic. I tend to start with a core concept, branch a few ideas off the core, and then break them down to a few words in the branches. If I need more depth, rather than make a complex mind map, I will usually just create a new mind map with another concept. The idea of having multiple branches on the same map just feels messy to me, but at the same time, having to jump between multiple maps is also messy.

Sekar's tutorial covers two concepts at the same time. The first is giving participants who may not have worked with mind maps a chance to do so. The second is testing mobile apps and categorizing the details of the app. The benefit of using mind maps in the process of testing is less about the rigid use of the tool and more about the idea that each core concept can have several points where we can branch off.

One of the fun things we do in these tutorials is get everyone on the same page and engaged in the discussions. In this tutorial, we were encouraged to break into groups and discuss a variety of topics. For mine, I chose music, and what I find interesting with mind maps is not so much what we add to the map, but why we add it. My map for music was broken into Instruments, Songs, Genres, and Modes. Why did I choose those? Possibly because I am a musician, and those are the things I think about. My guess is that a casual listener might not even know that modes exist, so they wouldn't include them as part of their breakdown.

It's time for our morning break, so I'm going to call an end to this post. I'll be back with another one in a bit. Thanks for joining me today :).