Tuesday, July 30, 2013
Book Review: The "A" Word
The last time I reviewed a book by Alan Page, it was "How We Test Software at Microsoft". That "review" actually turned into a full synopsis of the book, over multiple posts, and Alan joked that I had provided "the most exhaustive and complete book review of all time". He also said that, should he write another book in the future, I could have a copy free of charge. Unfortunately, or fortunately, depending on how you look at it, I couldn't take him up on that, since the proceeds for this particular book are going to Cancer Research, and I've lost too many friends to Cancer. Therefore, I guess I'll have to wait for his next book to be published to accept that freebie. I gladly paid for this one.
So what's this new book of Alan's I'm talking about? It's called "The "A" Word: Under the Covers of Test Automation". As you might guess, the central theme of the book is... "automation". Not how to do automation. Not tools used in automation. Much higher, conceptual discussions about automation. Where are we doing it right? Where are we missing the mark? Why do so many test automation projects and initiatives fail? In short, this is a collection of short essays, most of which are already available on Alan's blog. The benefit of having them here is that they are structured to flow into one another progressively. Alan is aiming this book at those who are interested in having a discussion about what, how, and why we think about automation the way we do, and he shares his own experiences and philosophy in that regard.
One of the key themes that will be obvious in just the first few chapters is that test automation is abused and misused. Automation is not a time saver, and automation does not replace manual testing. If this is how we are conditioned to think about automation, and we believe these two statements, Alan wants to make clear that we can, and must, do better than that.
Two thoughts that stick out in the first few pages, and beautifully encapsulate Alan's position, are as follows:
Humans fail when they don’t use automation to solve problems impossible or impractical for manual efforts.
Automation fails when it tries to do or verify something that’s more suited for a human evaluation.
Alan makes a very good case that automation should be used (needs to be used) to take care of "the boring parts", meaning the repetitive steps. If you were just to make a simple script to encapsulate five or so keyboard commands, you could get on with doing real stuff that matters, instead of wasting time with needless repetition. The problem is that, for many of us, that mindset carries over to all of our automation efforts, and really, we can do better.
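To make "the boring parts" concrete: a minimal sketch of what bundling a handful of repetitive setup steps into one script might look like. Everything here is hypothetical (the directory names, the config file, the function name are illustrative, not from the book); the point is only that five manual steps become one call, freeing the tester for work that actually needs a human.

```python
import tempfile
from pathlib import Path

def prepare_test_workspace(root: Path) -> Path:
    """Bundle the repetitive setup steps -- the 'boring parts' --
    into one call. All paths and names here are hypothetical
    examples, not anything prescribed by the book."""
    workspace = root / "test-run"
    workspace.mkdir(parents=True, exist_ok=True)
    # Steps that would otherwise be repeated by hand every run:
    (workspace / "logs").mkdir(exist_ok=True)       # step 1: log directory
    (workspace / "results").mkdir(exist_ok=True)    # step 2: results directory
    (workspace / "config.ini").write_text(          # step 3: baseline config
        "[app]\nverbose = true\n"
    )
    return workspace

workspace = prepare_test_workspace(Path(tempfile.mkdtemp()))
print(sorted(p.name for p in workspace.iterdir()))
```

Nothing about a script like this answers a testing question; it just clears the repetitive underbrush, which is exactly the scope Alan argues this kind of automation should stay within.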
One of my favorite chapters is "It's (probably) a Design Problem", and here Alan makes a great case as to why so many automation initiatives fail. This section is focused on why GUI automation is often a bad idea (note, often, not always) and lays out the case for where most of us fall short. While this is aimed at the shortcomings of GUI testing, the advice here is excellent for any automation project.
Alan's blogging style comes through in many of these posts. If you've ever heard Alan speak, every chapter rings with his voice and his mannerisms. It makes every section feel authentic, relevant and honest. He pokes fun at bad practices. He pokes fun at himself, but when he's in earnest, he's sharp, direct, and focused. He pulls no punches, and doesn't couch things in soft terms. For a direct example of this, check out "LOL - UR AUTOMASHUN SUCKZ!" No, seriously, do not skip this one. It's wonderful advice on how to get to test automation that, well, does not suck! As he puts it quite nicely in "Exploring Test Automation":
"...test design is far more holistic than thinking through a few sets of user tasks. You need to ask, “what’s really going on here?”, and “what do we really need to know?”. Automating a bunch of user tasks rarely answers those questions."
When I think of test automation, I don’t think of automating user tasks. I think, “How can I use the power of a computer to answer the testing questions I have?”, “How can I use the power of a computer to help me discover what I don’t know I don’t know?”, and “What scenarios do I need to investigate where using a computer is the only practical solution?”.
Alan makes a great case for the fact that test design is the most important part of all this, and this book focuses a lot on test design and understanding the real questions we want to have answered. Running manual tests, getting bored, then getting the computer to run our steps may be helpful in certain cases. It's wonderful for setting up environments, and for covering those areas that we know are a royal time suck every single time if we don't automate them. Using that as a basis for our test design, though, will leave us sorely lacking in tests that provide us any useful information, or help us learn anything new or interesting. Regression testing is fine, but there's so much more we can do, and should do, and we will not succeed unless we put some actual thought and effort into up-front test design.
As of this writing, this book is fairly brief. It's a total of 58 pages cover to cover. Do not think that means this is a "small" book. The information inside this volume is strong, focused and timeless. I like his take on things. I like where he comes from with his advice. I like that he is real and that he doesn't sugar coat things. Test automation done well is HARD. Test design done well is HARD. For those people who need to get a better handle on how to do it, you would do yourself a great service by getting The "A" Word and reading it cover to cover. As an added benefit, it's just straight up a fun read. When we're talking about software test automation... seriously, that's saying a lot!