Sunday, November 30, 2014

Book Review: Pride and Paradev

As I work to dig out of my deep hole of collected and unread book titles, I also want to give some attention to books I have that are a little less common or less heavily advertised.

I completely support the model of self-publishing and services like LeanPub and Lulu, and the opportunities they give to those in the field to publish their ideas without having to wait for a formal publisher to put them out. Additionally, I wanted to start alternating between books that are recent (like my review of “More Agile Testing”) and titles I’ve had for a year or more and haven’t yet reviewed. To that end, I am excited to give some time and attention today to Alister Scott’s “Pride and Paradev”.

“Pride and Paradev” is a short e-book, clocking in at just under 100 pages, yet it is probably the most concise and specific “what” book yet written about Agile testing and its contradictions. In fact, the book’s subtitle is “A collection of agile software testing contradictions”. Every question posed is answered with a yes and a no, a should and a shouldn’t, with clear explanations as to why both make sense in given contexts.

Alister is the author of the WatirMelon blog, and all of these contradictions are explored there. Wait, if that’s the case, then why should we go and get this book? Lots of reasons, really. For starters, they are all gathered together in one place, which makes it convenient. It can also be loaded onto your favorite reading device, or downloaded to your computer, and used offline as you see fit. Also, 10% of the proceeds from the sales of the e-book go towards helping the homeless in Australia, which I think is a perfectly awesome goal :).

Back to the book, and specifically the title: Alister states that a “paradev” is anyone on a software team who doesn't just do programming. That definitely includes software testers. As he points out in the introduction of the book:

“…some quick etymology […] para is also used to indicate “beyond, past, by” (think paradox: which translates to "beyond belief"). This same reasoning translates paradev into "beyond dev" or "past dev". […] paradevs are the people on the team that don’t box themselves into a narrow definition, happy to be flexible, and actually are happy to work on different things.”

The contradictions that Alister focuses on in the book are as follows:
  • Do agile teams even need a software tester?
  • Do agile software testers need technical skills?
  • Are software testers the gatekeepers or guardians of quality?
  • Should agile testers fix the bugs they find?
  • Should testers write the acceptance criteria?
  • Is software testing a good career choice?
  • Is it beneficial to attend software testing conferences? 
  • Should testers get a testing certification?
  • Should acceptance criteria be implicit or explicit?
  • Should your acceptance criteria be specified as Given/When/Then or checklists? 
  • Are physical or virtual story walls better?
  • Which is better: manual or automated testing?
  • Can we just test it in production?
  • What type of test environment should we test in?
  • Should you use test controllers for testing?
  • Should you use production data or generate test data for testing?
  • Should you test in old versions of Internet Explorer?
  • Should you use a tool to track bugs?
  • Should you raise trivial bugs?
  • Should you involve real users in testing?
  • Do you need an automated acceptance testing framework?
  • Who should write your automated acceptance tests?
  • What language should you use for your automated acceptance tests?
  • Should you use the Given/When/Then format to specify automated acceptance tests?
  • Should your element selectors be text or value based? 

Each of these questions is bolstered with quotes from programmers, testers, writers, celebrities, politicians, philosophers, and others to help make the case for each of the points where appropriate (and yes, it adds a dose of fun to the sections).

The book ends with three non-contradictions, which sum up the rest of the book pretty handily:
  • You can only grow by changing your mind.
  • Everything is contextual.
  • You can always choose your reaction.

Bottom Line: Testing is often more than just testing. It involves many disciplines, and in that way, testers go beyond just the programming of software. If you chafe at the title of “tester” and feel in the mood to provoke some interesting conversations, start referring to yourself as a “paradev” and see where the conversations go. If you do that, I would highly recommend getting this book and reading through its contradictions, and decide when and where the contradictions are those you should heed or ignore, do or do not. It’s ultimately up to you. As for me and my testers, including my daughter, I’m going to encourage discussions around being a “paradev”, and I’m going to use this book to do exactly that.

Saturday, November 29, 2014

Teaching My Daughter to Code and Test: Beginnings

Earlier this month, I brought my 13-year-old daughter Amber home from a presentation at Google/YouTube called "Made With Code". Amber came out of the presentation energized, excited, and saying how cool everything they showed her was.

This is the first time that Amber has been "excited" about how computers work beyond a "user's" perspective, thinking about computers and computing from a "maker's" perspective instead. As we talked, I thought about some freelance coding that I do, and how it might be fun to have her learn a bit about web development, do some work and "push it to production", and pay her a bit for her efforts.

It also seems that there might be some interesting "discoveries" and reactions from two perspectives, mine and hers, that might make for a cool blog series. With that, we both decided to work towards building some practice times into our days, and see if the concepts that I have been learning over the years are easily teachable, or if I might learn more from her interactions than she does from me.

Additionally, I figured it would be more interesting to see the experiments and the realizations we come to in somewhat real time rather than wait several months to do a more formal synopsis, so that's also what we will be doing over the coming weeks and months (for those who wondered if my Tsundoku post owed in part to this initiative, the answer is "definitely yes" :) ).

So you will start to see some "shared posts" in this space. If it's my perspective, it will be in a standard layout and font color. If you see highlighted green text, that's Amber, speaking in her own words. Over time, she may choose to make full blog posts here, and those will reflect that in the title, but for now, just know when you see green highlighted text, that is her.

One of the things we decided to start with was to get her focused on something simple, where it would be easy to see and make changes. To that end, we thought it would make sense to have her practice with Codecademy to learn basic details of web formatting and style. Early on, we both decided that a little each day would be a better approach than trying to do a whole bunch at one time.

For the past three weeks I have been working on learning the basics of HTML and CSS in Codecademy. I have found that it is easier to do a little of it every day and keep the streak up than do it all at once in one big shot. I have learned the very, very basics.

One of the things that has been fun, if not a little annoying, has been to have my Dad sitting next to me and helping me with some of the assignments and examples. I say fun because I like the fact that he can help me understand what is happening. I say annoying because he's my Dad. What I mean by that is that sometimes he's a little too quick to tell me when I am doing something wrong without me learning myself. We finally made an agreement that I would work on my own computer and that I would call him over only when I felt stuck or confused. While I appreciate his input, I told him I was not going to learn anything with him always hovering over me and telling me what to do.

Other than that, I am happy to say that, since joining Codecademy on November 10th, I have done a little bit each day, and I have a 19 day streak as of now. My Dad checked up on me every day that he was away in Ireland to make sure I kept my streak alive, and I was happy to say I did.

It has been interesting to see the ways that Amber interacts with me as we work together on the Codecademy projects. Speaking of which, we have a session scheduled for a little later today (and that will extend her streak to twenty days ;) ). I am really curious to see where this journey will lead us both, and what we both learn from the experience.

Friday, November 28, 2014

Book Review: More Agile Testing

In honor of running into Janet Gregory at EuroSTAR and listening to her talk about "Testing Traps to Avoid in Agile Teams", I told her I felt it only proper to break my Tsundoku, commit to reading “More Agile Testing” on my flight from Ireland to Toronto and during my layover before my flight back to SFO, and review it before I got home. Did I succeed? I did, and I am glad I had the dedicated time to do exactly that. This book is so rich with information that you will need to spend some quality time with it.

First, let’s set some context. This is the sequel to the "Agile Testing" book that Lisa Crispin and Janet Gregory wrote back in 2008, and that I reviewed in the early days of the TESTHEAD blog back in 2010. In that review, I said that I didn’t yet have time on an Agile team to do the book justice, so I reviewed it based on how Agile seemed to me and the advice given. This time, with More Agile Testing, I have four and a half years of experience with Agile teams, and I can categorically say yes, this book addresses many of the challenges Agilists go through, especially Agile testers.

Agile has grown and matured over the past several years. Some may say it has a clearer picture of itself; others may say it's become fragmented and just another marketing gimmick. Some may concede that Agile programming is a thing, but Agile Testing? All of this points to the fact that there are questions, dilemmas and issues in the world of Agile, and nowhere is that more clear (or more muddled) than for the Agile Tester. Are we an appendage? Are we an integrated member of the team? Are we an anachronism? What about DevOps? Continuous Delivery? Testing in Production? Lisa and Janet take on all of these issues, and more.

More Agile Testing is not a “how” book. It’s not filled with recipes of how to be an Agile tester… at least not on the surface. Don’t get me wrong, there is a ton of actionable stuff in this book, and anyone working with Agile teams will learn a lot and develop some new appreciation and approaches. What I mean about it not being a “how” book is that it doesn’t tell you specifically what to do. Instead, it is a “what” book, and there’s a whole lot of “what” in its pages. Like its predecessor, More Agile Testing does not need to be read cover to cover (though that’s a perfectly good way to read it, and the first time through, I’d highly recommend doing just that). Instead, each section can stand on its own, and each chapter is formatted to address specific challenges Agile teams face.

The book is broken up into eight sections. The first is an overview of where Agile has evolved, and the new aspects that are in play that were not so prevalent in 2008 when Agile Testing came out. In addition, it takes a look at the ways that organizations have changed, and the new landscape of software development for applications that span the gamut from desktop to web to mobile to embedded to the Internet of Things.

Section Two is all about Learning for Better Testing: from determining roles and adapting to new needs, to developing T-shaped team members to make box-shaped teams, to helping testers (and those interested in testing) develop more in-depth thinking skills and work habits to be more effective.

Section Three focuses on planning. No, not the massive up front planning of traditional development, but the fact that even the just in time and just enough process crowd does more planning than they give themselves credit for, and that the ways we do it can be pretty hit and miss. This section also goes back to the Agile testing quadrants and reviews how each has its own planning challenges. 

Section Four focuses on Testing Business Value. In short, are we building the right thing? Are we getting the right people involved? Do we have a clear vision of what our customer wants, and are we engaging and provoking the conversations necessary to help deliver on that promise? This section focuses on developing examples and using methodologies like ATDD and BDD, and identifying what we do know and what we don’t know.
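Since this section centers on developing examples, here's a minimal sketch of what "specification by example" in a Given/When/Then style can look like in plain Python. The ShoppingCart class and its methods are invented purely for this illustration; they are not taken from the book.

```python
# A hypothetical executable specification in the Given/When/Then style.
# The ShoppingCart class below is invented for this sketch.

class ShoppingCart:
    """Toy domain object standing in for the system under test."""
    def __init__(self):
        self.items = {}

    def add(self, name, price):
        self.items[name] = price

    def total(self):
        return sum(self.items.values())


def test_cart_total_reflects_added_items():
    # Given an empty shopping cart
    cart = ShoppingCart()
    # When the customer adds two items
    cart.add("book", 25.00)
    cart.add("pen", 3.50)
    # Then the total matches the sum of the item prices
    assert cart.total() == 28.50


test_cart_total_reflects_added_items()
```

The value here is less in the assertion itself and more in the conversation the Given/When/Then structure provokes about what "done" actually means for a story.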

Section Five places an emphasis on Exploratory Testing. What it is, what it’s not, developing testing charters, working with personas and tours, and working with the other varieties of testing needs and helping make sure our explorations also include territory not typically considered the realm of the explorer (such as Concurrency, Localization, Accessibility, UX, etc.).

Section Six focuses on Test Automation. Note, this talks about the concepts of test automation, not a prepackaged approach to doing test automation or a specific framework to use and modify based on examples, though it gives plenty of links to help the interested party find what they are looking for and lots more. 

Section Seven is all about context, specifically, what happens when we address testing in different organizations and with different levels of maturity and tooling? Version control, CI, and working with other teams and customers are addressed here, as are questions of Agile in a distributed environment. 

Section Eight is Agile Testing in Practice, focusing on giving testing the visibility it needs to be successful.

Appendix A shows examples of Page Object based automation using Selenium/WebDriver, and Appendix B is a list of “provocation starters”. In other words, if you are not sure what questions you want to ask your product or your programmers as you are testing, here are some open-ended options to play with.
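For readers who haven't met the Page Object pattern before, here is a minimal sketch of the idea Appendix A demonstrates. To keep it self-contained and runnable without a browser, a stub driver stands in for a real Selenium WebDriver, and all class and locator names here are invented for illustration rather than taken from the book.

```python
# Sketch of the Page Object pattern. StubDriver fakes a browser so the
# example runs without Selenium; with the real thing, LoginPage would
# wrap a selenium.webdriver instance instead.

class StubDriver:
    """Remembers what was typed and clicked, instead of driving a browser."""
    def __init__(self):
        self.fields = {}
        self.clicked = []

    def type_into(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        self.clicked.append(locator)


class LoginPage:
    """Page object: tests call intent-revealing methods, while the
    locators stay hidden in one maintainable place."""
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.type_into(self.USERNAME, username)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


driver = StubDriver()
LoginPage(driver).log_in("amber", "s3cret")
assert driver.clicked == ["#submit"]
```

The payoff is maintainability: if the login form's markup changes, only LoginPage needs updating, not every test that logs in.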

In addition to the aggregate of Lisa and Janet’s experience, there are dozens of sidebars throughout the book with multiple guest contributors explaining how they implement Agile in their organizations, and the tapestry of similarities and differences they have seen trying to make Agile work in organizations as diverse and different as each of the contributors.

Bottom Line: If you are brand new to Agile software development and Agile testing, this may not be the best place to start, as it expects that you already know about Agile practices. Having said that, I didn’t see anything in this book that would be too hard for a beginner with team guidance to consider, implement and experiment with. However, if you have already read Agile Testing and are hankering for more ideas to consider, then More Agile Testing will definitely help scratch that itch. Again, this is not a “how” book. This is a “what” and “why” book, but it has lots of great jumping-off points for the interested Agile tester to go and find the “how” that they are looking for. As a follow-on and sequel to an already solid first book, this is a welcome update, and IMO worth the time to read and reread.

Solving my Tsundoku: The Return of TESTHEAD Book Reviews

As I was settling in and preparing for my journey home from Ireland (which will include a flight from Dublin to Toronto (roughly eight hours), plus a seven hour layover, and then a flight from Toronto to San Francisco (roughly another seven hours)), I figured it was a great time to dig in and work through some of the books I have received to review, as well as some I have picked up for the work that I do as a tester and occasional programmer.

I receive a number of e-books in PDF format from a variety of sources. Some are offered to me free for me to review, many more are purchased by me to work through and bulk up my geek brain (well, that was the goal in any event). At some point, the desire to read and apply got overtaken by the real life aspects of work, family, testing initiatives and other things I do. All the while, my book pile keeps getting bigger and bigger.

Zeger van Hese gave a great keynote talk at EuroSTAR this week, and in the process, he talked a bit about those great linguistic terms that English doesn't have a succinct single word for. One example that resonated with me (to the point of causing physical discomfort, to be honest) was the Japanese word "Tsundoku" (積ん読, hiragana つんどく), which is, according to Wiktionary, "(informal) the act of leaving a book unread after buying it, typically piled up together with other such unread books".

The first way to deal with a problem is to realize you have a problem, and to that end, I have decided I am going to do something about that problem. How so? By making one of my "bold boasts" I make from time to time. Since we are still a few weeks before New Years, I cannot be accused of making a New Year's Resolution (since I don't make them ;) ), but I can declare a new goal, and that new goal is that I shall henceforth and forthwith start whittling down my book collection and actually read, apply and review the stack of books that I have. To that end, you may expect, in no particular order, reviews to start appearing for the following:
  1. Accessibility Handbook
  2. Apache JMeter
  3. Application Testing with Capybara
  4. Backbone.js Cookbook
  5. Beginning PHP 6 Apache MySQL 6 Web Development
  6. Build Your Own Web Site
  7. Computer Science Programming Basics in Ruby
  8. Confident Ruby
  9. Crackproof Your Software
  10. Design Accessible Web Sites
  11. Design Driven Testing
  12. Eloquent JavaScript, 2nd Edition
  13. Everyday Scripting with Ruby
  14. Exceptional Ruby
  15. Good Math
  16. Head First Ajax
  17. Head First HTML and CSS
  18. Head First HTML5 Programming
  19. Head First JavaScript
  20. Head First JavaScript Programming
  21. Head First Mobile Web
  22. Head First PHP and MySQL
  23. Head First SQL
  24. Head First jQuery
  25. Higher Order Perl
  26. How Linux Works
  27. Jasmine JavaScript Testing
  28. JavaScript for Kids
  29. JavaScript Security
  30. JavaScript Testing Beginner's Guide
  31. JavaScript and JSON Essentials
  32. JMeter Cookbook
  33. jQuery Cookbook
  34. Kali Linux Network Scanning Cookbook
  35. Lauren Ipsum: A Story About Computer Science and Other Improbable Things
  36. Learning JavaScript Data Structures and Algorithms
  37. Learning Metasploit Exploitation and Development
  38. Learning Python Testing
  39. Manga Guide to Calculus
  40. Manga Guide to Electricity
  41. Manga Guide to Physics
  42. Manga Guide to Statistics
  43. Mastering Regular Expressions
  44. Metaprogramming Ruby
  45. Metasploit Penetration Testing Cookbook
  46. Metasploit The Penetration Testers Guide
  47. Modern Perl
  48. More Agile Testing
  49. NGUI for Unity
  50. PHP MySQL JavaScript HTML5 All in One for Dummies
  51. Pride and Paradev
  52. Pro HTML5 Accessibility
  53. Python for Kids
  54. Rails Crash Course
  55. Regular Expressions Cookbook
  56. Responsive Web Design By Example
  57. Robot Framework Test Automation
  58. Ruby Wizardry
  59. Running Lean
  60. Selenium Design Patterns and Best Practices
  61. Selenium WebDriver Practical Guide
  62. Snip, Burn, Solder, Shred: Seriously Geeky Stuff to Make With Your Kids
  63. Specification by Example
  64. Test Driven Web Development with Python
  65. TestComplete Cookbook
  66. TestComplete Made Easier
  67. The Art of Application Performance Testing
  68. The Art of Software Testing, 3rd Edition
  69. The Selenium Guidebook
  70. The Well Grounded Rubyist
  71. Web Development with Django Cookbook
  72. Web Penetration Testing with Kali Linux
  73. Webbots Spiders and Screen Scrapers 2nd edition
  74. Wicked Cool PHP
  75. Wicked Cool Ruby Scripts
  76. Wireshark Essentials
  77. Zero to One
Yes, this is a line in the sand. Yes, I intend to fix this problem of mine. No, I cannot say which order these reviews will appear, but be sure, they are coming. Yes, I encourage you all to call me on it if I slack off.

One way or another, this begins today, and it will not finish until all of the stack is read, worked through and commented on. That may take a while ;).

Thursday, November 27, 2014

Green Grass and High Tides: Day Three at #esconfs

It's been a great few days here in my briefly adopted home. So many great opportunities to speak with friends from the Twittersphere, blogosphere and other arenas where personal "meetspace" has not been a factor, but having that in person opportunity has proven to be so worthwhile.

I've had the pleasure of meeting so many virtual friends in person, with a highlight being meeting and enjoying dinner with Julie Gardiner (a new in-person acquaintance) and Dawn Haynes (who I've known in person and have worked with in various capacities the past few years), among others. The awards dinner was held last night at Croke Park, a rather large stadium dedicated to Hurling and Gaelic Football. Below for your amusement is a shot of yours truly trying his hand at hitting a ball with a hurling stick (and if I'm mangling the lingo, please forgive me ;) ).

A bit about Dublin. The city center is small, easily walkable, with lots to see and do. The blending of old and new is everywhere apparent. One of the buildings that I saw had an inscription over the top saying it was for the "British and Irish Steam Packet Company". I am not even going to pretend to know what a Steam packet is, but it was cool to see these old buildings now being the current homes of telecommunications and web design companies, among other things.

Today is the final day of the conference proper, and there's a change in the program. One of the keynote speakers had to drop out at the last minute, so my friend Shmuel Gershon will be delivering the morning address. Speaking of which, I should get up there so I can actually report on it.


Shmuel started his talk with the fact that our world is completely dependent on software. Fifty years ago, this was not the case. Seventy-five years ago, there was no software to speak of for the vast majority of people (and for those who did interact with it, it was in its infancy). Today, we cannot imagine living our lives without it. In some ways, we as people have a chance to have a taste of virtual immortality. It's both macabre and fascinating to think that things like my Twitter account, my Facebook feed, or this blog could conceivably live on after I do. Our ability to "persist" outside of the memories of our families and friends has, up until now, been limited to a small number of celebrated people (politicians, philosophers, scientists, celebrities). Today, common everyday people have a chance to have a piece of their ideas and ideals live on after them.

We look at books as a way to retain and transfer knowledge. This has been a means of transfer for hundreds of years. They are permanent, solid, and transmissible, but they are difficult to change (books need to be reprinted to get new versions into people's hands). Today, with the development of software and electronic books, updating those titles is much easier, and the distribution is as simple as a button click. In previous years, if a publisher wanted to send me a physical book to review, it incurred a printing cost and a shipping cost to get it to me. Today, most of the books I review come in the form of PDFs, through email messages with links to download. There is a cost associated with it, but it is much smaller than before, and getting updated copies is, again, just a button click away.

Software is more than just a product. It is now a primary means of transferring knowledge and information. Having a product by itself is no longer sufficient. Now we need to be clear that the software we are producing is actually capturing and transferring the knowledge that we have and understand. This change is also filtering into the way that we test. Having the functions work the way we expect is no longer enough. We need to make sure that we are transferring knowledge and sharing information with our software creations. Programmers are filling software with knowledge, both tacit and explicit. We need to verify that both the knowledge and the mechanisms to transfer it are intact. That's an interesting shift in mental paradigm.

Shmuel used the example of how Portugal was looking for a way to get to India via another route than around Africa. The mission was to find a new route to India, but in the process, the explorers discovered the South American country of Brazil. The technical mission could be considered a failure: what are you doing wasting time on this place called Brazil? You need to find a new route to India. Fortunately for Portugal, they saw that the discovery of Brazil was a fundamental change to their world and their interests. In other words, falling short of a stated mission could be seen as a failure, but the discoveries and new opportunities that come from it can be substantial. Let's not be so myopic that we miss the great opportunities that we may literally stumble upon.

Most software that has staying power (think of many of the most prolific and long-lasting UNIX tools) starts as a need to "scratch a personal itch" for the programmers creating it. The successful tools we use, especially the ones that are freely available and have deep penetration, have staying power specifically because the needs they meet are generally universal. They scratch a personal itch, to be sure, but it's a good bet that the itch being scratched affects a lot of other people. Getting that feedback from others helps to determine which products will stand the test of time. The products that scratch the highest number of personal itches will have staying power.

When we are testing software, it may feel strange to think that we are really testing the transfer of knowledge, but once we do get that, our very vernacular of how we talk about the work we do changes. We live in an era where knowledge can be nurtured and developed in a variety of ways. Ultimately, we need to get out of the mindset of "have we shipped the product yet" and into "are we providing our customers with the best way to help them transfer knowledge that is essential to how they do business". Shmuel gives us a bold statement to consider: "We are not shipping a product, we are sustaining and preserving civilization". Try that on for size for a while, and see if that doesn't make you think about things a bit differently ;).


Next up is "Testing Traps to Avoid in Agile Teams" with Janet Gregory.  Having spent the past few years working as an embedded tester in an agile team, both as a Lone Tester and as a part of a broader testing team, I definitely have lived through much of this.

One of the aspects that has been a big change is the idea that we have to wait for a build before we can do any testing. I remember this well on my previous team, because we had a push to a demo machine that needed to be done. The term mini-waterfall gets thrown around, but perhaps a better term is the "ketchup effect": the testers tap on the ketchup bottle and wait for the ketchup to come out. When it finally does, it hits the food with a big chunk of ketchup. Testing is like this when we have to wait for a build to do our testing. In my current environment, we have set things up so that all of us have access to the build environment and build machines. I am able to load a build within minutes of a programmer committing changes. It's really cool that, on any given day, I can have a chat with a programmer about something they are working on, have them alert me when they've committed a change, and load that build within a few minutes of that change.

More to the point, there is a lot of testing that can be done between builds, and ways that we can loop around on the testing needed, knowing that once we get the most recent build, we can circle back to see the changes.

I know the feeling of having the programmers be the Agile component, where I was on the outside of the development process. I have had to wait until the stories come together before I can do any meaningful testing. This can still be a problem where I currently am, but we have been encouraged to be part of the process as early as possible. We practice the Three Amigos model, and that Three Amigos approach allows us to get involved very early in the process. Still, even with that, there are many times where we are waiting for DevComplete to occur before we get involved on a story beyond that first kickoff. At times, we have been able to do direct paired programming and testing, but it is more common to have the programmer do the initial work on the story before we get into the main testing. At times it can be valuable to get in immediately, at times it makes sense to wait until everything is in place for the first round of testing. We don't have to always be inserting ourselves into the initial programming, but if we can be helpful to the programmer during that early phase, then by all means, let's do that.

I've long struggled with the idea of being the "Quality Police" and trying to get out of the mindset of being the person who says "go or no go". By getting the team to focus on the quality of the product, rather than just the testers doing the legwork, we are able to get everyone's eyes involved and engaged. One of the things we do in our stories is we get into looking at acceptance criteria and the implementation. We don't file bugs for stories in process. Instead, we work through issues we discover and put the story back on the line. It's been a system that has worked pretty well, but there are ways we could probably do it more efficiently or with less turnaround time. A tighter feedback loop would certainly help with this. Additionally, testers should become more technically aware and understand the programmer's vernacular.

Automation is important, but there's a lot of testing that should be done manually. Having said that, there's a lot of repetitive work that should be automated, and keeping on top of that is a big job. Currently, my efforts are focused on more manual testing, but everyone on the team has both the ability and the chops to do some automation work. We have a dedicated automation toolsmith to handle the bulk of that work, but we all have the ability and the expectation to help out where we can. Still, there's a fair amount of automation that I think we could all be doing on the team to help us get ahead of the curve.

Having a large suite of automated tests, both at the unit test level and at the integration level, helps us keep on top of a number of things. There's also a fair amount of modification that has to be done from time to time. We try our best to make sure that the testability is there, and that the automation doesn't need to be modified regularly, but of course, things happen and new features get added all the time. The key goal is to focus on getting our most general tests and workflows automated, so that we can look at the special cases, or allow us to explore and look with fresh eyes on the areas that we may not be covering yet (am I sounding like a broken record with that yet ;)?).

One of the bigger dangers is that we sometimes forget the big picture. When we work with big systems (or when we do some large-scale update), we can get myopic and focus on something too intently. Sometimes these areas of focus conflict with what other people are doing; some changes for accessibility, as an example, have had to be reconsidered because large swaths of automated tests ended up breaking when we shut off access to elements that we didn't intend to. Building thorough workflows, and making sure we can complete them from start to finish and in parallel with other workflows, can be a big step in helping us see how workflows interact and have a knock-on effect on one another. In our environment, every story has a unique branch that gets merged to the main branch, and going through and making sure that these dozens of parallel branches keep the peace with each other is an interesting process.

So I recognize areas where my team can improve here, but overall, I think we do pretty well, all things considered. I'm looking forward to seeing how we can refine this list over the coming weeks and months.


Next up, "Diversity in your Test Team: Embrace it or lose the best thing you have" with Julie Gardiner. Julie has been someone I have seen multiple times, but never in person. She's been recorded for various conferences, and I've enjoyed her delivery and sense of humor, and the people aspect she delivers in her message.

Just like there is no one-size-fits-all in test cases, there's no one-size-fits-all in testers, either. Diversity is a hot topic at the moment, but too often, we talk about diversity when it comes to the makeup of the team members alone. It's not just about their genetic makeup and the variety thereof, but the skill sets that they bring to the table. When we look at getting different genders, different cultures, and different life experiences, and then insist that they all do the same work the same way, we are totally missing the point. True efficiency and effectiveness from the team needs to consider what each person brings to the table, and works to maximize those efforts and abilities, while respecting the fact that not everyone is interchangeable. Some are familiar with the Dreyfus Model of Skill Acquisition. It's a good bet that, even if we get a roomful of people with similar skills and technical background, we would be able to plot where each person's skills fall on the scale from one to five (Novice, Advanced Beginner, Competent, Proficient, and Expert). If we are truly honest, there will be a continuum with all of the skills represented. By being honest about what each person can do, we can give them the guidance they need, or we can give them the freedom to do what they do best.

Group dynamics are also part of this equation, and the ways that people communicate vary. Julie put up a questionnaire and some examples of communication styles. Julie's examples are listed in the pictures. I'll post mine when I have a little more time ;).

The idea is that, if we take the totals for each side (the x and y axes), we can plot where each person lands on the XY grid. As you might guess, few people will be in the same place. There will be a broad distribution, and the more people we have on the team, the broader that distribution is likely to be. The general breakdown has four quadrants, and in those quadrants, we have The Pragmatist, The Facilitator, The Analyst, and The Pioneer.
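The scoring mechanics are easy to sketch. Here's a toy version (entirely my own illustration: the axis meanings, scores and zero midpoint are invented, not Julie's actual questionnaire):

```python
# Toy sketch: map a pair of questionnaire axis totals to one of the
# four communication-style quadrants. The sign conventions here are
# invented for illustration only.
def quadrant(x, y):
    if x >= 0 and y >= 0:
        return "The Pioneer"
    if x < 0 and y >= 0:
        return "The Facilitator"
    if x < 0 and y < 0:
        return "The Analyst"
    return "The Pragmatist"   # x >= 0, y < 0

# Made-up team scores, one (x, y) pair per person.
team = {"Alice": (3, 4), "Bob": (-2, 1), "Carol": (-1, -5), "Dave": (2, -3)}
styles = {name: quadrant(x, y) for name, (x, y) in team.items()}
print(styles)
```

Even with only four people, you get all four quadrants represented, which is rather the point.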

Each is an archetype, and if we are honest, each person falls on a continuum of each (few are totally one quadrant). The point is, each person is going to approach their work, their communication, and their methodology differently. Managing these people is going to require different approaches, and each will have different needs.

Change and process improvement also fall on a continuum with people. Not everyone is comfortable with dramatic changes. The Pioneers are more likely to be, while the Pragmatists are less likely to be. The Facilitators are game if they can collaborate on the process, and the Analysts will want to see the theory and work the problems to be sure they are on the right track. Do you recognize yourself in these representations? Do you recognize your team members? Are they duplicates of you? Of course they aren't. They have their own distributions and their own avenues for how they like to work.

The key takeaway is that we need to be more aware of the fact that diversity is more than just gender, ethnic background, sexual orientation and the typical breakdown that we keep hearing about. Don't get me wrong, those are very important, and the greater the distribution of those items, the more likely you will get diversity in the areas of thought, personality, problem solving and skills. If we do all of this work to get people with differences, only to insist they shape themselves to be replaceable cogs, we are doing both them and our teams a huge disservice.


The closing keynote with Zeger van Hese, titled "Everything is Connected - Exploring Diversity, Innovation & Leadership", started with an explanation of Myers-Briggs types, and Zeger's personal distribution and what those things mean. The inspiration for the theme is a tribar of diversity, leadership and innovation, and the interconnectedness of those aspects.

There's linguistic diversity, which introduced some interesting terms I had never heard before, and how those terms are unique to their cultures (it's cool to see that Japanese has a single word for "the condition of buying books but not reading them so that they pile up in a great big stack on your desk". I needed that sentence; Japanese has one word ;) ). A variety of responses is needed to be able to meet the demands of an organization. Hiring for variety allows for the ability to get people from a range of backgrounds to make a diverse and creative team. If we focus too hard on getting people like us, or people who think and act like us, we should not be surprised when what we get is a generic and bland response, because everyone is basically the same.

Randomness and serendipity are also important, since there is an interesting variety of options that can take place when we are open to allowing that randomness to happen. However, don't misconstrue randomness and serendipity with just winging it. Preparation and readiness are necessary for randomness and serendipity to be effective. It takes time and effort to be prepared, but when you are ready for anything, then anything can make its appearance ;).

Ask yourself, are you creative? Most people will probably say they aren't (about a third of the room said they were). I think we shortchange ourselves. Many of us are able to be very creative at times, but we assume that, unless we are insanely creative and productive all the time, we do not have true creativity. That's a false equivalence. Being creative is a state of mind and action, based on stimulus and a willingness to respond. We equate creativity with quality, and frankly, most of us do not start out with quality creations. We frequently suck when we start something. I appreciate the long time readers of my blog, and those who think I write cool things. As of today, I've put 926 posts up on this blog, and that does not include the dozens of entries that I deleted midway, and perhaps the hundreds of entries that never made it into a post at all because I thought they were junk.

Creativity is not just a spark of inspiration. Often it starts that way, but if you haven't put in the time and energy to develop the skills necessary to use it, it won't matter very much. Don't take that to be too pessimistic; it's not meant to be. What I mean is that many of our efforts are going to be less than masterpieces. Of those 926 posts, half of them are below average (by definition ;) ). Half of them fall below the median of quality as well. Still, if you were to ask ten different people which of my posts deserve to be above or below that median line, you might get a broad variety of answers. That's because what is good matters to the person who is consuming the data, not the person writing it. You may think The Princess Bride is one of the greatest movies of all time. Someone else may decide it is a completely corny movie. What matters is not what other people think, but what you think. Your desires and motivations will help decide how you feel about certain things.

Leadership needs to be able to handle the diversity mix for it to be effective. Again, leadership is something that we as a general population seem to have a problem with. Part of this is the fact that leadership is sold as this insanely altruistic or hyper-focused attitude. We automatically think that the leader is the alpha dog, and that's not necessarily the case. Everyone has a bit of leadership ability in them, and under the right circumstances, those leadership opportunities can be sought out and applied without feeling one has to be a general or a manager/director.

Sometimes we suffer from the status quo bias, where we tend to struggle with reconciling "new" with "useful". We may miss opportunities because we do not see the benefit. If we were more honest, we might even say that the opportunity scares us. We can't turn off that fear, but perhaps we can channel those feelings more effectively. Sure, there will be abrasion, and there may even be genuine fear and frustration, but by embracing and making room for that ambiguity, we can let real creativity develop.

With that, the official program ends, and it was announced that next year's EuroSTAR will be held in November in Maastricht, Netherlands. My congratulations to Ruud Teunissen on being named conference chair for next year, and my thanks and gratitude to Paul Gerrard for doing yeoman's work as conference chair this year. To Paul and the staff that helped put on EuroSTAR this year, may I say "well done and thank you".


You thought I would be finished, but you'd be wrong. There was an after-conference session about "Programming for Testers" that I found too compelling to pass up.

Anyone familiar with my blog knows I have answered this question many times. Do testers need to be professional programmers? No. Is it advantageous? Absolutely! Do I code? Enough to be dangerous. Am I a professional coder? Nope, but I strive to learn enough to be both dangerous and effective.

Therefore, I'm glad to put myself into a play-time situation to see how the two instructors want to cover this topic. Rather than just writing "Hello World", we're actually going to control some external devices, such as a robotic arm, using a Raspberry Pi.

A hop, skip and a jump, a Python distribution download and a Geany download later, I am ready to go... I think ;).

Python installed, Geany installed.

A few lines of basic code, a compile and an execute, and here we are:
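It went something like this (my reconstruction of the gist, not the exact code from the session):

```python
# A first Python program: store a greeting and print it.
greeting = "Hello, EuroSTAR!"
print(greeting)
```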

wow, this feels like "Learn Ruby the Hard Way" all over again (LOL!)

Create two numbers? Sure, we can do that :):

declare two numbers and print the two numbers. Whee!!!
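Roughly like so (again, my reconstruction):

```python
# Declare two numbers, then print each one.
a = 7
b = 6
print(a)
print(b)
```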

And now let's multiply two numbers and print out the string:
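Something along these lines (my reconstruction):

```python
# Multiply two numbers and print the result as a formatted string.
a = 7
b = 6
product = a * b
print("%d times %d is %d" % (a, b, product))
```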

it works, and it feels good :)!!!

Something a little more interesting? OK, here's a loop.
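My version of the loop looked roughly like this (a reconstruction, not the session's exact code):

```python
# A simple loop: compute and print the first five squares.
squares = []
for i in range(1, 6):
    squares.append(i * i)
    print(i, "squared is", i * i)
```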

And here's a comparison:
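Again reconstructing the gist of it:

```python
# A simple comparison using if/else.
a = 7
b = 6
if a > b:
    result = "a is bigger"
else:
    result = "a is not bigger"
print(result)
```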

Moving along very fast, here's a Raspberry Pi, running a robotic arm, controlled by a Wii Controller. Yep, a bit of a code jump, but not too insane ;).

And with that, I'm off to tour the Guinness Storehouse... cue the jokes about the guy who doesn't drink touring a world famous beer factory. It's OK, I'm used to it ;). Again, thanks very much for playing along, it's been a fun several days. Dublin, you've been fantastic, Eurostar, you have likewise been amazing. Happy Thanksgiving to all of my U.S. friends, and to all else, enjoy the rest of your Thursday :).

Making testers very happy... OK, who wants mine (LOL!)?!

Wednesday, November 26, 2014

Green Thoughts: Day Two at #esconfs

It really puts into perspective how much of a time difference eight hours is. I was mostly good yesterday, but this morning came way too fast, and I am definitely feeling the time change. A bit of a foggy morning today, but a brisk walk from the Maldron Pearse and a snack on a peanut bar, and I'm ready to go :).

Convention Center Dublin, aka Where the Action is :)
The first keynote for today is being given by Isabel Evans, and it's titled "Restore to Factory Settings", or to put it simply, what happens when a change program goes wrong. We always want to think that making changes will be positive. The truth is, there are some strange things that happen, and it's entirely possible that we might find ourselves in situations we never considered. Isabel works with Dolphin Systems, a company associated with accessibility (hey, something in my wheelhouse, awesome :) ).

The initial problems were related to quality issues, and their idea was that improved testing would solve these problems. Isabel figured "30 years of testing, sure I can do this, but there are not just testing issues here". Starting with issue #1, improve testing: first, there were no dedicated testers. Testing was done by whoever was available to do it. An obvious first step was to do some defect recognition, and develop skills to help them discover defects and actually talk about them.

Isabel suggested that she sit with the developers and work with them, and even that request was at first a difficult transition. She had to fight for that position, but ultimately they decided the arrangement made sense. By recruiting some of the support team and getting others involved, they were able to put together a test team.

With a variety of initiatives, they were able to improve defect recognition, and requirements acquisition also improved. Sounds great, right? Well, the reality is that the discovery of issues was actually adding to the time to release. They improved the testing, which identified more bugs, which added a greater workload, which added pressure to the release schedule, which meant more mistakes were made, and more bugs were introduced. Now, for those of us who are familiar with testing, this is very logical, but the point Isabel is making is that testing alone, and a greater emphasis on testing, will not automatically mean better quality. In fact, it has a significant chance of making the issues worse at first, because they are now being shown the light of day.

The talk title "Restore to Factory Settings" refers to the fact that, when things get tough, the natural reaction is to go back to doing what everyone always did. There are enthusiastic adopters, people against the whole idea, and then there are waverers in the middle. The waverers are the ones who hold the power. They will revert back to their standard operating procedure (SOP) when things get tough. Even the enthusiastic adopters, if they are not encouraged, will revert back to SOP. The people against will go back to the old ways the second they get a chance. Management, meanwhile, is getting agitated that this big movement to change everything is not getting traction. Sounds absurd, but it happens all the time, and I'm sure we've all experienced this in one form or another.

The key takeaway that Isabel was describing is that changes to testing are often icing on a much thicker and denser cake. Changing testing will not change the underlying culture of a company. It will not change the underlying architecture of a product. Management not being willing to change their approach also adds to the issues. If the rest of the cake is not being dealt with, testing alone will not improve quality. In fact, it's likely in the short term to make quality issues even worse, because now there is clarity of the issues, but no motivation to change the behaviors that brought everyone there.

This all reminds me of the four stages of team development (forming, storming, norming and performing), and the fact that the testing changes fit clearly into the storming stage. If the organization doesn't embrace the changes, then the situation never really gets out of the storming stage, and morale takes perpetual hits. Plans describe what likely won't happen in the future, but we still plan so we have a basis to manage change. Risk management is all about stuff that hasn't happened, but we still need to consider it so we are prepared if it actually does happen. In short, "Say the Same, Do the Same, Get the Same".

Change is hard, and every change in the organization tends to cause disruption. Change programs bring all of the ugly bits to the surface, and the realizations tumble like dominoes. To quote Roland Orzabal's "Goodnight Song", nothing ever changes unless there's some pain. As the pain is recognized, the motivation for change becomes clearer. Prioritization takes center stage. Change has a real fighting chance of succeeding.

Ultimately, there is a right time for implementing changes, and no one thing is going to solve all the problems. Continuous improvement is a great pair of buzzwords, but the process itself is really tricky to implement and, more importantly, to sustain.


Next up, "Every Software Tester has a PRICE" with Michael Bolton. I've come to this session because I am often curious as to how we present the information we find, and the ways that we can gather that information. Ecery test should have an expected predicted result. Makes sense for checks, but it doesn't really hold up for testing. Testing is more involved, and may lead you to completely different conclusions. Often, the phrase we hear is "I don't have enough information to test". Is that true? It may well be, but the more fundamental question is "where do we get the information we need in the first place?"

Our developers, our company web site, our user guide, our story backlog, our completed stories, our project charter, our specifications, our customers, other products in our particular space, etc. Additionally, the elusive requirements that we are hoping to find to inform us are often not anything that is written down. Tacit knowledge that resides collectively in the heads of our organization is what ultimately makes up the requirements that matter. The tricky part is gathering together all of the miscellaneous parts so that they can be made actionable. Think about it this way. For those of us who have kids, do we know the exact addresses of our kids' schools, or where they go for their extracurricular activities? I'm willing to bet most of us don't, but we know how to get there. It only becomes an issue when we have to explain it to someone else. As testers, we need to consider ourselves the person who has to work out those addresses for all of those collective kids and where they need to go.

The fact is, there's lots of information that is communicated to us by body language and by inference. Requirements are ultimately "collective tacit knowledge". What we require of a product cannot be completely coded or ever truly known. That doesn't mean that we cannot come close, or get to a reasonable level that will help generate a good enough working model. One of the interesting aspects of the iPhone, for example, is "charisma"... what makes an iPhone an iPhone, and what makes it compelling? Is it its technical capabilities, or does it just "feel good"? How do we capture that charisma as a product element, as a feature to be tested?

One of the best sources of information, and one not talked about very often, is the process of experimentation. In other words, we develop the requirements by actively testing and experimenting with the product, or with the people responsible for the product. Interviewing individuals associated with the product will help inform what we want to be building (think customer support, customers, manufacturing, sales, marketing, subject matter experts, senior management, etc.), and our experimenting with their input will give us even more ideas to focus on. We also develop oracles to help us see potential issues (in the sense that an oracle is some mechanism that helps us determine if there is an issue before us). The product itself can inform us of what it could do. We can also do thought experiments about what a product might do.

What this shows us is that there are many sources of information for test ideas and test artifacts in ways that most of us never consider. We place limits on our capabilities that are artificial. So many of our ideas are limited by our own imaginations and our own tenacity. If we really want to get deep on a topic, we are able to do that and do it effectively. Often, though, we suffer not from a lack of imagination, but from a lack of will to use it. So much of what we want to do is dictated by a perceived lack of time, so we try to limit ourselves to the areas that will be the quickest and most accessible. This is not a bad thing, but it points out the limitations in our efforts. We trade effectiveness for efficiency, and in the process, we cut off so many usable avenues that would help us define and determine how to guide our efforts.

Next up, "How Diversity Challenged me to be Innovative as a Test Team Leader" with Nathalie Rooseboom de Vries - van Delft.

What does diversity really mean? What does it mean to embrace and utilize diversity? What happens when you go from being a team of one as a consultant to wanting to be a team leader and manage people? How can we get fifteen unique and different people to work together and become a single team? What's more, what happens when you have to work with a team and a culture that is ossified in older practices? This is the world Nathalie jumped into. I frankly don't envy her.

One of the biggest benefits of being a consultant is that, after a period of doing a particular job or focus, you can leave; the engagement is temporary, and you don't have to live with the aftermath of the decisions that follow on. When we make a commitment to become part of a team long term, we inherit all of the dysfunction, oddity, and unique factors that the team is built from. The dynamics of each organization are unique, but they tend to be variations on similar themes. The ultimate goal of an organization is to release a product that makes money. Testers are there to help make sure the product that goes out is of the highest quality possible, but make no mistake, testers do not make a company money (well, they do if you are selling testing services, but generally speaking, they don't). Getting a team on the same page is a challenge, and when you aim to get everyone working together, part of that balance is understanding how to emphasize the strengths of your teammates.

Nathalie used an example of what she called a "Parent/Adult/Child" metaphor for transactions. The Parent role can have over-positive and over-negative aspects: it can nurture, but it can also be controlling; it can be consoling and yet blaming. The Child role is both docile and rebellious, unresponsive and insecure. In some early interactions, there may well be Parent/Child interactions, but the goal is to move over time to a more Adult/Adult interaction. To get that equality of behavior, sometimes you have to use the Parent relationship to get the behavior from the "Child", or if you want to get the Parent to respond differently, the Child needs to use a different technique to get that behavior to manifest.

The ability to challenge members of your team will require different methods. Diversity of the team will make it impossible to use the same technique for every member. They each have unique approaches and unique interests and motivations. One of Nathalie's approaches is to have a jar with lollipops and a question-and-answer methodology. If you post a question, you get a lollipop. If you answer a question, you get a lollipop, too. The net result is that people realize that they can answer each other's questions. They can learn from each other, and they can improve the overall team's focus by adding to the knowledge of the entire team and getting a little recognition for doing that. She also uses a simple game called "grababall", which has a number of tasks and things that need to be done. The idea is that when you grab a ball, you have a goal inside the ball to accomplish. If you accomplish the goal, you get a point. At the end of the year, the highest point accrual gets a prize. By working on these small goals and getting points, the team gets engaged, and it becomes a bit more fun.

Diversity is more than just the makeup of the team, of having different genders, life experiences or ethnic backgrounds. Diversity goes farther. Understanding the ways that your team members are motivated, and the different ways that they can be engaged can give huge benefits to the organization. Take the time to discover how they respond, and what aspects motivate them, then play to those aspects.


Next up, "Introducting Operational Intelligence into Testing" with Albert Witteveen. Albert has had a dual career, where he has spent time in both testing and in operations (specifically in the Telco space). Testers are all familiar with the issues that happen after a product goes live. The delay, the discovery, the finger pointing... yet Operations discovers the problem in a short period of time.

What is the secret? Why do the operations people find things testers don't? It's not as simple as the testers missed stuff (though that is part of the answer); it's also that the operational folks actually use the product and manage and monitor the business processes. Operations people have different tools, and have different focuses.
Testers can be a bit myopic at times. If our tests pass, we move on to other things. Small errors may be within the margin of error for us. In Ops, the errors need to be addressed and considered. Operations doesn't have an expected result, they are driven by the errors and the issues. In the Ops world, "every error counts".

Operations managers have log entries and other issues that are reported. With those, they work backwards to get the systems to tell them where the issues are occurring. In short, logs are a huge resource, and few testers are tapping them for their full value.
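As a trivial illustration of "every error counts" (my own sketch, nothing to do with Albert's actual tooling), even a few lines of scripting can turn a raw log into a tally a tester can act on:

```python
import re
from collections import Counter

# Toy log excerpt; in practice these lines would be read from a file.
log_lines = [
    "2014-11-26 10:01:02 INFO  request handled in 120ms",
    "2014-11-26 10:01:03 ERROR timeout talking to payment gateway",
    "2014-11-26 10:01:04 WARN  retrying request",
    "2014-11-26 10:01:05 ERROR timeout talking to payment gateway",
]

# Count how often each severity level appears.
levels = Counter(re.search(r"\b(INFO|WARN|ERROR)\b", line).group(1)
                 for line in log_lines)
print(levels)
```

Two timeouts in three seconds might be inside a tester's margin of error, but in the Ops world that's a pattern worth chasing.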

So what does this mean? Does it mean we need Operations people on the testing team? Actually, that's not a bad idea. If possible, have a liaison working with the testers. If that's not a reasonable option, then have the operations people teach/train the testers how to use logs and look for issues.

Sharing the tools that operations uses for monitoring and examining the systems would go a long way toward letting testers see what is happening to the servers under a real load, with real analytics of what is happening in the systems over time. If there is any one silver bullet I can see from doing Ops-level monitoring and testing, it's that we can objectively see the issues, and we can see them as they actually happen, not just when we want to see them happen.


I'm in Adam Knight's talk "Big Data, Small Sprint". What is big data, other than a buzzword for storing a lot of details and records? Who, three years ago, even really knew what "big data" was? When you talk about big data, you are talking about large bulk-load data. Adam's product specifically deals with database archiving.

This model dealt with tens of millions of records, dozens of partitions and low-frequency ingesting of data (perhaps once a week). Their new challenge was to handle millions of records per hour, with tens of thousands of partitions. By working within Agile and targeting the specific use cases of this customer, they were able to deliver the basic building blocks to this customer within one sprint. Now imagine storing tens of billions of records each day (I'm trying to, really, and it's a bit of a struggle). Adam showed a picture of an elephant, then a Titanosaurus, and then the Death Star. This is not meant to represent the size of increase for the records, but the headaches that testers are now dealing with.

In a big data system, can we consider the individual? Yes, but we cannot effectively test every individual record uniquely. Can the data be manipulated? Yes, but it needs to be done in a different way. We also can't manage an entire dataset on a single machine. We can back up a system, but the backup will be too big for testing purposes. Is big data too big to wrap one's head around? It requires a different order of magnitude to discuss (think moving from kilometers or miles to astronomical units or light years to describe distances in space).

OK, so this stuff is big. We get that now. But how can we test something this big? We start by changing our perspective, and we shift from focusing on every record to focusing on the structures and how they are populated with representative data (from records to partitions, from data to metadata, from single databases to clusters). Queries would not be made to pull a row from every conceivable table. Instead, we'd be looking at pulling representational data over multiple partitions. Testers working on big data projects need to develop special skills beyond classic testing. There is a multi-skill requirement, but the idea of finding individual testers who each have all of the skills needed is highly unlikely. Adam discussed the idea of developing the people on the test team to strive to be "T"-shaped. A T-shaped tester would have many broad but rudimentary test skills, as well as a few core competencies that they would know very deeply. By combining complementary T-shaped testers, you can make a fully functional, square-shaped team.
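To make the shift "from records to metadata" concrete, here's a toy sketch (entirely my own illustration, not Adam's product): instead of re-reading every row, compare each partition's actual row count against the count recorded when the partition was ingested, and only dig into the partitions that disagree.

```python
# Toy partition-level check: verify metadata instead of every record.
# Partition names and counts are made up for illustration.
partitions = [
    {"name": "2014-11-24", "expected_rows": 1000000, "actual_rows": 1000000},
    {"name": "2014-11-25", "expected_rows": 1200000, "actual_rows": 1199998},
    {"name": "2014-11-26", "expected_rows": 900000,  "actual_rows": 900000},
]

mismatches = [p["name"] for p in partitions
              if p["expected_rows"] != p["actual_rows"]]
print("partitions needing a closer look:", mismatches)
```

One partition out of three needs a closer look; the other tens of millions of rows never have to be touched.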

Adam mentioned using Ganglia as a way to monitor a cluster of machines (there's that word again ;) ) so that the data, the logs and other detail can be examined in semi-real time. To be frank, my systems don't come anywhere close to these levels of magnitude, but we still have a fairly extensive amount of data, and these approaches are interesting, to say the least :).


I promised my friend Jokin Aspiazu that I would give him a testing challenge while we were here at EuroSTAR. Jokin humored me for the better part of an hour and a half, showing me how to tackle an issue in the EuroSTAR test lab. I asked him to evaluate a time tracking application and either sell me on its implementation or convince me it was not shippable, and to find a scenario significant enough to kill the project.

He succeeded :).

Of course, this app is designed to be a Test Lab app to be puzzled through and with, but I asked him to look beyond the requirements for the Test Lab's challenge and look at the product holistically, and to give me a yes/no in a limited amount of time (all the while articulating his reasoning as he went, which I'm sure must have been mildly annoying ;) ).

With that, I determined Jokin earned advancement as a Brown Belt in the Miagi-do School of Software Testing. For those of you here, high-five him and buy him a drink when you see him; he's earned it!

A last minute substitution caused me to jump onto another talk, in this case "The Silent Assassins" by Geoff Thompson. These are the mistakes that can kill even the best planned projects.

Silent Assassin #1: Focus on Speed, not on Quality. Think of a production floor that takes up more space fixing problems coming off the line than it allocates to actually building new product well.

Silent Assassin #2: The Blindness to the True Cost of Quality. Supporting and maintaining the software costs a lot more than putting it together initially. Consider the amount of money it will take to maintain a system.

Silent Assassin #3: Go by Feel, Not by Facts. Metrics can be abused, and we can measure all sorts of worthless stuff, but data and actual deliverables are real things, and therefore, we need to make sure we have the facts on our side to say if we are going to be able to deliver a product on time. In short, we don't know what we don't know, so get our relevant facts in order.

Silent Assassin #4: Kicking Off a Project Before the Business is Ready. Do our customers actually understand what they will be getting? It's not enough for us to deliver what "we" think the appropriate solution is; the customers need to have a say, and if we don't give them a say, the adoption may be minimal (or even non-existent). Which leads to...

Silent Assassin #5: Lack of Buy-in From Users. Insufficient preparation, a lack of training, no demonstration of new features and methods, will likewise kill the adoption of a project with users.

Silent Assassin #6: Faulty Design. Software design defects compound as they go. The later a fundamental design issue is discovered, the harder it will be to fix, and in many cases, the problems will be exponentially more difficult to fix.

OK, that's all great, but what can we actually do about it? To disarm the assassins, you need to approach the areas that these problems fall into. The first area is Process: the way you do the work and the auto-pilot aspects of the work we do. The next is People: getting people on the teams to work with each other, getting buy-in from customers, communicating regularly, and taking the people into consideration in our efforts. The last area is Tools, and it's listed last because we often reach for the tool first, but if we haven't figured out the other two first, the tools are not going to be effective (or at least not as effective as they actually could be). Focus on effectiveness first, then shoot for efficiency.

Shift Left & Compress: put a clear focus on delivering the highest quality solution to customers at the lowest possible cost point. In my vernacular, this comes down to "review the work we do and get to the party early". Focus on the root causes of issues, and actually do something to stop the issues from happening. The Compress point is to do it early, prioritize up front, and spend your energy on the most meaningful efforts. Easy to say, often really difficult to do correctly. Again, the organization as a whole needs to buy in to this for it to be effective. This may also... actually, scratch that, it will need investments of time, money, energy, emotion and commitment to get past these assassins. These are difficult issues, and they are costly ones, but tackling them head on may give you a leg up on delivering a product that will be much less costly to maintain later. The money will be spent. The question is how and when ;).


Yes, this happened, and yes, it was glorious :)!!!

...and as an added bonus, SmartBear sings a Tester's Lament to the score of Frozen's "Let It Go" :)

Wednesday's closing Keynote is with Julian Harty; the topic is "Software Talks - Are You Listening?"

First question... why bother? Don't we know what we need to do? Of course we do, or so we think. However, I totally relate to the fact that software tells us a lot of things. It has its own language, and without listening, we can do many things that are counter to our best intentions.

The first way that our software talks to us is through our user base and through their interactions. If we remove features they like, especially in the app world, the software will tell us through lower ratings (perhaps much lower ratings). Analytics can help us, but there's much more we can learn (and actually do something about) early than late, after the software has been released.

Logs are our friend, and in many ways, the logs are the most loquacious artifact of all. So much information is available, and most testers don't avail themselves of it, if they look at the logs at all. Analytics can be filtered from devices, churned into a number of different breakdowns, and then we can try to understand what is happening in real time. The information we gather can be helpful, but we need to develop insights from it. We want to gather design events, implementation events, field test data, evaluations, things that will tell us who is using what, when and where. A/B testing fits very well in this space. We can see how one group of users reacts compared to another group. We can gauge precision and accuracy, so long as we don't conflate the two automatically. It's entirely possible that we can be incredibly precise, yet missing the target completely.
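The A/B comparison idea is simple enough to sketch. This is a hypothetical illustration: the visitor and conversion counts for the two groups are made up, and no real analytics API is involved.

```python
# Hypothetical sketch: comparing how two groups of users react in an A/B test.
# The event counts are invented illustration data, not real analytics output.

def conversion_rate(conversions, visitors):
    """Fraction of visitors who completed the action being measured."""
    return conversions / visitors if visitors else 0.0

# Group A sees the current feature, group B sees the variant.
group_a = {"visitors": 1000, "conversions": 120}
group_b = {"visitors": 1000, "conversions": 150}

rate_a = conversion_rate(group_a["conversions"], group_a["visitors"])
rate_b = conversion_rate(group_b["conversions"], group_b["visitors"])

print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  lift: {rate_b - rate_a:+.1%}")
```

Whether a lift like that is meaningful still depends on sample size and statistics, of course; the sketch only shows the shape of the comparison.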

There are dark sides to analytics, too. One cardinal rule is "Do No Harm": your app should not do things that would have negative effects (such as having a flashlight app track your every move while it is in use and upload that location data). We can look at the number of downloads, the number of crashes and the percentage of users on a particular revision. If we see that a particular OS is in significant use, and that OS has a number of crashes, we can deduce the priority of working on that issue and its effect on a large population of users.
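That deduction about OS usage and crashes can be sketched as a small prioritization score. The usage and crash numbers below are invented for illustration, and the scoring formula (crash rate weighted by user share) is one plausible choice, not a prescribed method.

```python
# Hypothetical sketch: ranking OS versions for crash-fixing priority by weighing
# crash rate against the share of users on that OS. All numbers are illustrative.

usage = {"OS 7.1": 45000, "OS 8.0": 30000, "OS 6.5": 5000}   # active users
crashes = {"OS 7.1": 900, "OS 8.0": 150, "OS 6.5": 400}      # crash reports

def priority(os_name):
    """Crashes per user times user share: issues hitting many people rank high."""
    total_users = sum(usage.values())
    crash_rate = crashes[os_name] / usage[os_name]
    user_share = usage[os_name] / total_users
    return crash_rate * user_share

ranked = sorted(usage, key=priority, reverse=True)
for os_name in ranked:
    print(f"{os_name}: priority score {priority(os_name):.4f}")
```

Note how the least-used OS with the worst crash rate can still outrank a more popular but more stable one; the weighting is where the judgment lives.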

The key takeaway is that we can learn a lot about our users: what they do, what they don't do, and what they really wish they could do. We leave a lot on the table when we don't take advantage of what the code tells us... so let's make sure we are listening ;).

Well, that's it for today, not counting socializing, schmoozing and dinner. Hope you had fun following along today, and I'll see you again tomorrow morning.

Tuesday, November 25, 2014

So Many Shades of Green: Day One at #esconfs

A first glimpse of Ireland from the air and in the airport.
Hello all and welcome to another Live Blogging extravaganza from yours truly. It's been a while since I've done one of these, but I am happy to be back in the saddle once again. My journey has already been an eventful one, starting with a flight from San Francisco to Washington DC, changing to another plane, then changing to yet another plane because an oven was on the fritz in the plane, and then a trans-Atlantic flight which literally just ended about an hour ago. A trip through Immigration, a hop on the Green Bus, and now I'm sitting in a large auditorium with Paul Gerrard chatting about what to expect over the next three days.


The first talk I am witnessing is courtesy of Andy Stanford-Clark, and it's about "The Internet of Things's Coming!" What is the Internet of Things? It's the interconnection of devices and information details that are not necessarily associated with what we typically refer to as Internet-enabled devices. We've spent the past three decades talking about computers, and phones and tablets and communication devices. Those are things we are now well used to seeing, but what about the lights in our house? Our refrigerator? Our home thermostat? The train station display? Many of these devices use homebrew tools (think Arduino/Raspberry Pi, or other devices that can be created to control or set up servers that we can query, modify and update). To some, this is the epitome of nerdy, and for others, it's genuine and valuable information that helps us make decisions about what we can do (hmmm, that sounds familiar :) ). This initial and primitive "Internet of Things", as it stands today, is more of a fun curiosity for forward-minded nerdy types, but the promise of what it can offer is very compelling. What if we could actually put together a clear understanding of how we use energy, as an example? We know we use water, electricity and gas for various purposes, but do we really know when we are using water? What really is causing the largest percentage of usage? Is it family showers? Laundry? Me changing out the water of the fish tanks? Garden maintenance? How cool would it be if I could get an hour-to-hour breakdown of water usage each day and drill down to see various times? That is a perfect application of the Internet of Things, if we choose to set it up and look at it.
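The hour-by-hour water breakdown above only takes a few lines once the readings exist somewhere. This is a hypothetical illustration: the timestamps and liter amounts are invented, not data from any real meter or sensor.

```python
# Hypothetical sketch of the water-usage idea: roll raw meter readings
# (timestamp, liters used since the previous reading) into an hour-by-hour
# breakdown we could drill into. Sample readings are made up for illustration.

from collections import defaultdict
from datetime import datetime

readings = [
    (datetime(2014, 11, 25, 7, 15), 40),   # morning showers
    (datetime(2014, 11, 25, 7, 40), 35),
    (datetime(2014, 11, 25, 13, 5), 60),   # laundry
    (datetime(2014, 11, 25, 18, 30), 25),  # garden / fish tanks
]

usage_by_hour = defaultdict(int)
for ts, liters in readings:
    usage_by_hour[ts.hour] += liters       # bucket each reading by hour of day

for hour in sorted(usage_by_hour):
    print(f"{hour:02d}:00  {usage_by_hour[hour]} L")
```

The hard part, as the talk suggests, isn't the aggregation; it's getting the sensors wired up and reporting in the first place.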

Shall we get even more crazy? How about making a mousetrap that actually tells you when it has caught a mouse? Sound weird? Well, with an Arduino board and a mechanical mouse trap, it's doable, and Andy and his family did it.

All interesting insights and novel uses, but how will this engage and interest the average everyday user, and when will we see the move away from the "nerdy" crowd towards everyday people using these devices? More to the point, how will we actually be able to test this stuff? Embedded systems knowledge will certainly help, but in many ways, Arduino/Raspberry Pi gives those who want to work in these areas some of the best up-front training imaginable. Many of the systems use simple languages like Scratch or JavaScript (well, simple by comparison ;) ), and there will certainly need to be a change of focus. Awareness and familiarity with working with circuit boards helps, but the interfaces are not *that* foreign, thank goodness :). Some interesting issues still need to be considered, such as how to power all of these devices, and ways to limit how they are powered (the goal is to make these options available without adding a large additional power load). Many of these devices are being positioned as solutions for helping people save power, so that's a creative challenge to consider. Additionally, there's the question of how to simulate thousands of devices running. How will we do that? How will we test that? These are questions that the next few years will probably start to answer for us. No matter how you look at it, this will not be boring ;).


Next up, Adapting Automation to the Available Workforce with Colm Harrington. This is a topic that has long interested me and has likewise vexed me in a variety of workplaces. Colm started with an anecdote about Einstein and examinations he was giving his students. He gave the same questions as before, and when called on it, he replied "the questions are the same, but the answers have changed". Automation has changed in the last several years as well. The commercial tools have lost a lot of ground; WebDriver is currently king. The needs for automation are extending, and manual testers who exclusively do just manual testing are becoming more and more rare. All of us are doing some level of automation, but not all of us are seasoned programmers and software developers. We need to do a better job of enabling more people in the organization to use and modify automation so it is useful to more of the organization. It's a great promise and a wonderful goal, but how do we bring those to the table who do not already have a strong automation background, and more to the point, can we get the software out on time, at or under budget, without embarrassing errors getting into the hands of our customers? Automation is secondary to that, but still very important.

Colm's goal is not to have people get too detailed in the code, which makes sense with the topic. The goal is not to force testers to write automation, but to encourage them to get involved in a meaningful way, and at a level they can be comfortable with. Automation can cover a lot of ground, but for me, the biggest issue is the tedious stuff of setup, population and traversal. Automation that addresses that area alone makes me very happy. Yes, it takes time to set a lot of this up, but at least we have the ability to set everything up from start to finish so I can get more deeply into the corner areas. When we have to set up everything manually, by the time we get to the places that are interesting, we end up exhausted, and less likely to find interesting things. To that end, automation can be done that doesn't require the programmer to have an in-depth knowledge of all the internals. Instead, we can be engaged in focusing on the traversal steps we know we do all the time.

One of the biggest challenges that organizations have is that they do not have the ability or the time to take a full team and train them from scratch. However, if the team has taken the time to implement a framework that is easily modified, or that allows individuals on the team to get some quick wins, that will definitely help speed the success of individuals making their way into automation. Using a Domain Specific Language or API, many of the steps can be compartmentalized so that the whole team can communicate with the same language. Will the toolsmiths have an advantage? Of course they will, but they will also be able to make a system that all of the participants can leverage (think of Cucumber and the ability to write statements that are well understood by everyone). When the testers write the tests and cover various test cases, the testers' knowledge is being used in the way that is most effective, with the programmers able to fill in the blanks so the testers can better focus on test design and implementation, rather than trying to wrench the tool to work for their benefit.
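A minimal sketch of that DSL idea, in Python rather than Cucumber: the step names (`login_as`, `add_to_cart`, `checkout`) and the `FakeDriver` stand-in are invented for illustration. The point is that the driver details live in one place while the test reads in the team's language.

```python
# Hypothetical sketch of a tiny domain-specific layer: testers call readable
# steps, while the underlying driver plumbing is hidden behind them.

class FakeDriver:
    """Stand-in for a real browser driver (e.g. WebDriver); just records actions."""
    def __init__(self):
        self.actions = []
    def do(self, action):
        self.actions.append(action)

driver = FakeDriver()

# The DSL layer: each step reads like the business language of the team.
def login_as(user):
    driver.do(f"login:{user}")

def add_to_cart(item):
    driver.do(f"add:{item}")

def checkout():
    driver.do("checkout")

# A test written against the DSL stays readable to the whole team.
login_as("alice")
add_to_cart("blue widget")
checkout()
print(driver.actions)
```

Swap `FakeDriver` for a real driver and only the plumbing changes; the tests testers wrote stay intact, which is exactly the quick win the framework is supposed to enable.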

Some of the best ways to make it possible for testers and others to be effective are to keep things simple, consistent and intuitive. Keep data and test scenarios as separate from each other as possible, do the best you can to encourage a common language between the code and the test implementations (use labels and methods that make sense by how they are named and what they actually do), and keep tests as atomic as possible (one test from beginning to end, in as few steps as necessary to accomplish the goal). Additionally, a key consideration is to balance the ability of the tests and methods to be humane over minimal. Refactoring to where the intention is obfuscated is much less helpful than allowing a little more verbosity to give all the participants a clear understanding of what's happening. Also, use the option to create soft assertions, which allow the user to check 50 different fields, notice the one place it fails, and inform at the end of the test rather than stopping cold at the first error discovered by a hard assert.
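The soft-assertion point can be sketched as a small collector. The `SoftAssert` class name and the form fields below are invented for illustration; libraries like AssertJ offer the same pattern off the shelf.

```python
# Hypothetical sketch of a soft-assertion collector: it checks every field,
# records mismatches, and reports them all at the end rather than stopping
# cold at the first failure.

class SoftAssert:
    def __init__(self):
        self.failures = []

    def check(self, label, actual, expected):
        """Record a mismatch instead of raising immediately."""
        if actual != expected:
            self.failures.append(f"{label}: expected {expected!r}, got {actual!r}")

    def assert_all(self):
        """Raise once, at the end, with every collected failure."""
        if self.failures:
            raise AssertionError("\n".join(self.failures))

form = {"name": "Ada", "country": "IE", "newsletter": False}
soft = SoftAssert()
soft.check("name", form["name"], "Ada")
soft.check("country", form["country"], "US")      # mismatch recorded, test keeps going
soft.check("newsletter", form["newsletter"], False)
try:
    soft.assert_all()                              # reports every failure at the end
except AssertionError as failures:
    print(failures)
```

With 50 fields instead of three, one run tells you everything that's wrong instead of one thing per run, which is the whole appeal.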

Other important considerations: don't make something reusable if all you are doing is adding to its reuselessness. Let the client code shape the API, and don't have the API be set in stone. Devs and QA need to work together, in proximity or at least in communication. If you can sit together, sit together; if not, share screens and talk together at the same time, even if not in the same place.


Next up is Rikard Edgren and "Trying to Teach Testing Skills & Judgment". This particular topic is near and dear to my heart for a variety of reasons, the most important being the fact that my daughter has started the process of learning how to write code with me. More than teaching her how to code, though, I want to teach her how to test, and more to the point, teach her how to test effectively, with the ability to learn about what is important, not just doing the busywork associated with general testing. The model Rikard is describing is a 1-2 year education arrangement, with internships and other opportunities to actually get into real situations. Rikard's approach and philosophy is as follows:

- Motivation is key
- Not for the money
- It's not about us
- Encourage new ideas
- Don't be afraid

Rikard mentions the value of focusing on Tacit Skills and Judgment, including asking good questions, applying critical thinking, understanding what is important to the customers, learning quickly, looking at a variety of perspectives, utilizing effective test strategies, and looking for those opportunities to "catch lightning in a bottle" from time to time (i.e. serendipity) and of course, knowing when good enough actually is ;).

Rikard shared a story where he was working with a programmer who decided to try being a tester: while Rikard could see the problems left and right, the programmer didn't necessarily see the issues, or made assumptions based on how the code was being used. The key was, the tester wanted to see the problems. Programmers often want to get the work finished and off their plates (I totally understand this, and believe me, I make the same excuses when I am the one writing the code). This is also why I find it imperative to have someone else test the code I write, and not be afraid to tell me my creation is ugly (or at the very least, could be substantially improved ;) ).

Critical thinking isn't just questioning everything. We need to be discerning in the things we question. Start with "What if..." to get the ball rolling. This will help the tester start thinking creatively and get into unique areas they might not automatically consider. Also, be aware of the biases that enter your purview (and don't ever say you don't have biases; everyone does ;) ).

Everyone thinks differently, and the ability of a teacher to explain things in a variety of ways is critical. Likewise, we want to encourage those we are teaching to try a variety of things, even if the attempts are not successful or lead to frustration. We need to step back, regroup and give them a chance to look at what they did well, where they could improve, and how they can get the most out of the experiences. There will be theory and hard topics, and those are important, but always couch the concepts in practical uses. The names are not essential; the use and the understanding of how things work is (well, the names are seriously helpful to make sure that we do what we need to and can communicate effectively, but focus on what is being done more so than what it is called, at least at first. Once they get what is happening, the names will make sense ;) ).

Rikard has a paper that covers this topic. I'll reference it as soon as I get time to get to the link and update this stream of consciousness. Oh, and lookie lookie... green and yellow cards to manage question flow... now where have I seen that before ;)?


Next up, the closing keynote for Tuesday with Rob Lambert about "Continuous Delivery and DevOps: Moving from Staged To Pervasive Testing". I've often heard of this mystical world of DevOps, I've even heard of Continuous Deployment and heard rumors of people doing it. We do pretty well where we are, but we don't currently have a full scale Continuous Delivery system in place.  Still, there is a sense of wonder and appreciation whenever I hear about this in practice.

Rob spent the first part of the talk discussing what many of us know all too well, the long slog march of staged development, testing and release. Don't get me wrong, I am not a fan of this approach at all (too many years suffering through it), especially because, at the tail end, they would pull in everyone humanly possible to test a release (and ultimately, a condemnation for why software testing is ineffective, slow and boring). Yet ironically, the next project gets run exactly the same way.

This brought the big question to the fore: "why do we keep doing these massive, slow-running releases?" When the customer needs change, we need to change with them, and big cumbersome releases don't allow for that. Major releases also require a lot of testing of a lot of code at one time, and that invariably means slow, cumbersome, and most likely not fully covered. Releasing in smaller and more frequent chunks means that less code has to go out, less overall thrashing takes place, and the feedback loop is much tighter.

How to do that? Rob's company chose to do the following:

- Adopt Agile
- Prioritize work
- Bring Dev and Ops together (DevOps, get it ;)?)
- Everyone tests. Testing all the time.
- The team needs to become one with the data, and understand what the servers are telling us about our apps and services

They removed testers from the center of the team. Note, they didn't remove testing from the center; in fact, that's the very switch they made. Testing always happens, and the programmers get into it as well. This goes beyond Test Driven Development. It means that automation is used to verify tests where possible, with a more aggressive approach to canning as many tests as possible and a progressive march to get more coverage and more tests in place with each story. This is very similar to what we do on my team. The automated tests are the things we want to run every time we do a build, so we emphasize getting those tests reliable, understandable, and easy to modify if need be. Ideally, the idea is to automate as much of the drudgery as we can so that we have fresh eyes and fresh energy to look at actual new features and learn what those new features actually do.

Cycle times vary, and each organization can modify and tweak its cycle times as it chooses. If weekly is the shipping schedule you want to use, then your cycle time needs to be somewhere between four and five days. Dog fooding (or pre-production) is a life that we understand very well. It helps us see the real performance, the actual workflows and how they are processed, and the good, the bad and the ugly that surrounds them. Rob emphasizes that exploratory testing be used along with the focus on automation, with an emphasis on the testing that is most critical. The key to success is a focus on continuous improvement and questioning the effectiveness of what you are doing. There will be political battles, and often, there will be issues with people rather than issues with technology or process. Additionally, everyone knows how to test, but not everyone knows how to test with relevance. Remove the drudgery where you can, so that the testers' eyes are open and they have fresh energy to tackle real and important problems. If "anyone can test", then examine the tests that anyone can do, and ask critically if that testing is providing value.


More to come, stay tuned :)!!!