On Monday while I was at CAST, I was the room helper for Dhanasekar Subramaniam's tutorial, "Using Mind Maps for Mobile Testing". Much of the session centered on heuristics for mobile testing and on capturing those heuristics elegantly inside mind maps. As part of the process, we spent some time creating mind maps in XMind to use later in our chartered test sessions. I've done this before; I've even created a full mind map of the entire James Bach Heuristic Test Strategy Model (yes, one mind map, and yes, when fully expanded it is massive. Probably too massive). As we created nodes and sub-nodes, Sekar pointed out that there were many labels that could be applied to the nodes, and that the labels were additive. In other words, each node could have several labels applied to it.
As I was looking at this, and seeing labels such as pie-chart fills, green check boxes, people silhouettes in several different colors, red X's, green, yellow, and red exclamation points, and many others, I started thinking about how, by color and proximity, we could gauge how much coverage a particular mind map represented (or in this case, how completely we had applied a heuristic to our testing) and what the results were. Instead of stopping to write down lots of notes, each node we tested would get a label, and each label would have a semantic meaning: a green check box meant good, a red X meant failed or something wrong, a quarter pie chart meant a quarter done, and a yellow square meant a warning, but maybe not an error. Different colored people icons would identify the person who performed that set of steps, and so on.
As I was looking at this, I joked with Sekar that we could tell the entire testing story for a feature or the way we applied a heuristic to a story in one place with one relatively small map. We both chuckled at that, and went on to do other things.
The more I thought about this, though, the more I liked the idea. At a previous company, we set up a machine with a couple of flat-screen monitors attached. These flat screens were placed in the main area and left on, cycling through the images they displayed, which in this case were graphs and pages of results relevant to us. In short, they acted as information radiators for our team. At a glance, we could tell whether the build had failed, whether a deployment was successful, and where the issue was if there was one. We could use the same technique for information radiation here. Imagine a charter or a set of charters, each with its own mind map, and each mind map cycling through presentation on the monitor(s). The benefit would be that, at a glance, the team would know how testing was going for that area, and we could update it all very quickly. I kept experimenting with it, and the more I did, the more I became convinced this just might work.
To that end, I am holding a Weekend Testing session this coming Saturday, August 8, 2015 at 10:00 a.m. PDT. We will look at mind mapping in general and XMind in particular, and we will develop a small heuristic for a feature (within XMind itself) to test and to update. I really like this idea, but I want to see if it can be tinkered with, and if it might be a useful approach to others.
If you think this might be a fun way to spend a couple of hours, come join us on Skype. Search for "weekendtestersamericas" and add us as a contact. On Saturday, get on Skype about 20 minutes before the session and say you want to be added. The only prerequisite, if you want to follow along and actually do the exercise, is to download the XMind app for your platform.
Again, I apologize for the short notice, but I hope to see you Saturday.