Well, this should be interesting :).
OK, I'm poking a little fun here because I know the presenter quite well. I met Matt Griscom at a Lean Coffee event in Seattle when I was speaking at ALM Forum in 2014. At the time he was talking about this idea of "MetaAutomation" and asked if I would be willing to help review the book he was working on. I said sure, and now, four years later, he's on the third edition of the book, and I've been interested in seeing it develop over the years.
First, let's get to the premise: test automation is broken? I don't necessarily disagree. There's certainly a lot of truly inefficient automation out there, but Matt goes further and says it's genuinely broken. We write tests that don't tell us anything; we see a green light and move forward without really knowing whether our automation is doing what we intend it to do. We definitely notice when a test breaks or fails, but how many of our passing tests are actually doing anything worth talking about?
The idea is that the code includes self-documenting checks that show where issues are happening and communicate at each level. These shouldn't be thought of as unit tests; at the atomic level of unit-test error checking, MetaAutomation might be overkill. Outside of atomic unit tests, though, it's meant to be useful to everyone from business analysts and product owners to software testers and developers. The real promise of MetaAutomation, according to Matt, is that QA should be the backbone of communication, and MetaAutomation helps make that happen. Bold statements, but to borrow from Foodstuff/Savor's Annie Reese... "MetaAutomation... what is it?!"
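To make the "self-documenting checks that communicate at each level" idea concrete, here is a minimal sketch of what such a check artifact might look like. This is my own illustration, not code from Matt's book: the `Check` and `step` names are hypothetical, and the JSON output is just one plausible way to record hierarchical, self-describing steps instead of a bare pass/fail light.

```python
# Hypothetical sketch: a check that records every named step, with nesting,
# so the result artifact documents what the check actually did.
# All names here (Check, step) are illustrative, not from MetaAutomation itself.

import json
from contextlib import contextmanager


class Check:
    """Records named steps hierarchically; the report is readable by
    testers, developers, and business stakeholders alike."""

    def __init__(self, name):
        self.name = name
        self.steps = []            # top-level step records
        self._stack = [self.steps]

    @contextmanager
    def step(self, description):
        record = {"step": description, "status": "pass", "children": []}
        self._stack[-1].append(record)
        self._stack.append(record["children"])
        try:
            yield
        except Exception as exc:
            record["status"] = f"FAIL: {exc}"  # the artifact shows where it broke
            raise
        finally:
            self._stack.pop()

    def report(self):
        return json.dumps({"check": self.name, "steps": self.steps}, indent=2)


check = Check("customer can log in")
with check.step("navigate to login page"):
    with check.step("page loads"):
        pass  # real verification would go here
with check.step("submit valid credentials"):
    pass
print(check.report())
```

Even on a green run, the report names every step that was performed, which is the point: a passing check that says nothing is exactly the problem described above.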
Basically, MetaAutomation comes down to a pattern language that can be summed up in the following table (it's too involved to retype, and I don't want to mess it up):
|Click on image to see full table in larger type|
Here's a question to consider... "In Quality Automation, what would you like your software team to do better?" You have one minute. No, seriously, he asked that question, set a timer, and asked us to talk amongst ourselves ;). A tells B, then B tells A.
What did we come up with?
It would be nice to improve visibility on tests that don't run due to gating conditions.
It would be nice to improve the understanding of what tests are actually doing and how they do it.
It would be good to get a handle on the external dependencies that may mess up our tests.
It would be great if the testers and developers could talk to each other sooner.
Granted, this is a tough topic to get to the bottom of in a blog post covering a single session, but if you're interested in hearing more, check out http://metaautomation.net/ for more details.
Hi Michael! Thanks for the mention and the photos (although darn I should lose some weight).
I guess I wasn't being clear - I'll need to work on that - but "if it doesn't find bugs, don't bother" is the attitude to avoid. There's a great diagram (in my book, and in a TechWell article I'm working on) that shows two different emphases we need to take when measuring and managing quality for the SUT: finding bugs, and verifying requirements. They're both important. It's not just about bugs :). The fast, repeatable, self-documenting automation I'm talking about with the MetaAutomation pattern language is focused on verifying requirements.