Wednesday, April 30, 2014

Django 1.7 and… ME (yet another Live Blog)

So this might seem an odd spot, but this has become something of a mission on my part for 2014. I've decided that I want to try to become multi-lingual when it comes to web frameworks. We have all sorts of interesting frameworks to play with to make web apps and web sites, and Django is the Python-centric web framework, and therefore one I want to know more about and get more experience with. Seems a great reason to come into San Francisco and see what the Pythonistas are doing and how they are doing it with Django.

Yes, this is going to be live blogged, and as usual, it may be messy at first. Forgive the stream of consciousness, I promise I’ll clean it up later :).


A bit about our topic this evening (courtesy of Meetup):

Django 1.7 is one of the biggest releases in recent years for Django; several major new features, innumerable smaller improvements, and some big changes to parts of Django that have lain unchanged since before version 1.0. Come and learn about new app loading, system checks, customized select_related, custom lookups, and, of course, migrations. We'll cover both the advantages these new features bring as well as the issues you might have when upgrading from 1.6 or below.


A bit about our presenter this evening (also from Meetup):

Andrew Godwin is a Django core developer, the author of South and the new django.db.migrations framework, and currently works for Eventbrite as a Senior Software Engineer, working on system architecture. He's been using Django since 2007, and has worked on far too many Django websites at this point. In his spare time, he also enjoys flying planes, archery, and cheese.


Lightning Talks 

#1 Randall Degges - Django & Bcrypt

Randall kicked things off right away with a talk about how Django does password hashing and securing of passwords, along with the estimated cost of what it takes to crack a password (hint: it's not that hard). If you want to be more security conscious, Randall recommends that we consider using bcrypt. It's been around a while, and it allows for transparent password upgrading (users' hashes are upgraded the first time they log in. No muss, no fuss :). Sounds kinda cool, to tell the truth; I'm looking forward to playing with it for a bit.
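
For reference, here's a minimal sketch of what switching Django over to bcrypt looks like, assuming a bcrypt library (py-bcrypt or bcrypt) is installed; the ordering matters, since the first entry is used for new hashes and the rest keep older hashes verifiable so they can be transparently upgraded at login:

    # settings.py (sketch)
    PASSWORD_HASHERS = (
        "django.contrib.auth.hashers.BCryptSHA256PasswordHasher",  # new hashes use bcrypt
        "django.contrib.auth.hashers.BCryptPasswordHasher",
        "django.contrib.auth.hashers.PBKDF2PasswordHasher",        # keeps existing PBKDF2 hashes working
        "django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher",
        "django.contrib.auth.hashers.SHA1PasswordHasher",
    )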

#2 Venkata - Django Rest Framework w/ in-line resource expanding

The second talk covered a bit of the Django REST Framework. Some of the cool methods to handle drop-downs, pop-opens and other events were very quickly covered, along with some quick details as to what each item can do. A quick discussion with fast flashes of code. I caught some of the details, but I'll be the first to admit, a lot of this flew right past me (which gives me a better idea as to areas I need to get a little more familiar with). Granted, this is a lightning talk, so that should be expected, but hey, I pride myself on being able to keep up ;).

#3 Django Meetup Recap

The third lightning talk was basically a recap of what the Django group has been covering in previous meetups (Ansible, Office Entrance Theme Music, Integrating Django & NoSQL, etc.). Takeaway: if we want resources after the meetups are over, we have a place to go (and I thank you for that :) ).

---

Andrew Godwin's Talk

This seems like a great time to say that I'm relatively new to Django, so a lot of what's being discussed is kind of exciting because it makes me feel like I'll be able to get into what's being offered without having to worry about unlearning a lot of things to feel comfortable with the new details. Part of the new code is the built-in successor to South (which, as mentioned above, is something Andrew is intimately involved in).

Andrew walked through details of how apps are now loaded, and the new system check framework that can warn programmers about what may happen with an upgrade. Having suffered through a few updates where things worked, then didn't, with no clue as to why, I find this very appealing.
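
I haven't played with the system check framework yet, but from what was shown the shape of it is roughly the following; the check itself is a hypothetical example of mine, not one of Django's built-ins:

    # myapp/checks.py (sketch of a custom system check)
    from django.conf import settings
    from django.core.checks import Warning, register

    @register()
    def debug_check(app_configs, **kwargs):
        # runs with `manage.py check` and before migrate/runserver
        errors = []
        if getattr(settings, "DEBUG", False):
            errors.append(
                Warning(
                    "DEBUG is True; don't ship this configuration.",
                    hint="Set DEBUG = False in your production settings.",
                    id="myapp.W001",
                )
            )
        return errors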

Another new aspect is an adjustable and tunable prefetch option, so that instead of all-or-nothing, there's a spectrum of choices for what gets looked up, tailored to the context.
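
If I followed correctly, this is the new Prefetch object that lets you hand prefetch_related a custom queryset. A minimal sketch, with made-up model names:

    from django.db.models import Prefetch

    # Album and Track are hypothetical models with a reverse relation "tracks"
    albums = Album.objects.prefetch_related(
        Prefetch(
            "tracks",
            queryset=Track.objects.filter(released=True),
            to_attr="released_tracks",  # prefetched rows land on album.released_tracks
        )
    )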

A rather ominous slide has flashed across saying "Important Upgrade Notes": all field classes need to have a deconstruct() method. It's now required for every field. Additionally, initial_data is dead; apps should use data migrations instead. In short, don't automatically assume that older apps that rely on initial_data will work cleanly. I will take Andrew's word on that ;).
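
A couple of quick sketches of what those two notes translate to in practice, as I understand them (field, app, and model names here are hypothetical):

    # 1. Custom fields need deconstruct() so migrations can serialize them
    from django.db import models

    class LowercaseCharField(models.CharField):
        def __init__(self, *args, **kwargs):
            self.force_lower = kwargs.pop("force_lower", True)
            super(LowercaseCharField, self).__init__(*args, **kwargs)

        def deconstruct(self):
            name, path, args, kwargs = super(LowercaseCharField, self).deconstruct()
            kwargs["force_lower"] = self.force_lower
            return name, path, args, kwargs

    # 2. Instead of an initial_data fixture, seed rows in a data migration
    #    (this would live in something like music/migrations/0002_seed_sections.py)
    from django.db import migrations

    def seed_sections(apps, schema_editor):
        Section = apps.get_model("music", "Section")
        Section.objects.get_or_create(name="Strings")

    class Migration(migrations.Migration):
        dependencies = [("music", "0001_initial")]
        operations = [migrations.RunPython(seed_sections)]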

So what's coming up in Django 1.8? Definitely improvements in interactions with PostgreSQL, as well as migrations for the contrib apps. But that's getting a bit ahead of the race at the moment. Expect Django 1.7 to hit the scene around May 15th, give or take a few days. Again, I will take Andrew's word on that ;).

---

There's no question, I feel a little bit like a fish out of water, and frankly, that's great! This reminds me well that there is so much I need to learn, especially if my goal of becoming a technical tester is going to advance farther than just wishful thinking or following pre-written recipes. It's not enough to just "know a framework" or "know my framework".

As was aptly demonstrated to me a year and a half ago, I spent a lot of time in the Rails stack, and then I went to work with a company that didn't use Rails at all. Did that mean all that time and learning was wasted? Of course not. It gave me a way to look at how frameworks are constructed and how they interact. I'm thinking of it like learning Spanish when I was younger. Don't get me wrong, I'm no great shakes when it comes to Spanish, but I understand a fair amount, and can follow along in many conversations. What's really cool is that it gives me an added benefit: I can follow a little bit of both French and Italian as well, since they are closely related. That's how I feel about learning a variety of web frameworks. The more of them I learn, the easier it will be to move between them, and to understand the challenges they all face.

In any event, this was an interesting and whirlwind tour of some new stuff happening in Django, and I plan to come back and learn more, with an eye to understanding more next time than I did today. Frankly, that shouldn't be too hard to accomplish ;).


Thanks for hanging out with me. Have a good rest of the evening, wherever you are.


Friday, April 25, 2014

TECHNICAL TESTER FRIDAY - Getting UnGraphical with lynx and grep

Use lynx --dump to retrieve the contents of your Web site. Just hardcode all the page URLs. Redirect all the content to flat files, then use grep to look for patterns in your content. Start by looking for mistakes you commonly make. Save your greps in a file.

Wow, now this brings back some memories :). 

I first loaded up a lynx browser back in 1993, and this was my introduction to what the non-graphical World Wide Web looked like. Truth be told, I fairly quickly abandoned lynx as an everyday platform when NCSA Mosaic and the first version of Netscape came out, but there is indeed a value to using lynx. It's a nice tool to add to accessibility tests, so that you can see what your super pretty graphical page looks like to those who don't have that option. For those curious... it looks like this (well, mine looks like this):

Yep, that's what the Web looked like in 1993. Cool, huh?


lynx --dump does exactly what it sounds like.

Here’s an example from my own little site project:

lynx --dump http://127.0.0.1/web/orchestra/index.php

This prints the following to the screen:

Adding a redirect ('>') puts it in a file for us. Repeat a bunch of times, and you can pull down details on every page in your site.

OK, cool, that’s interesting, but what does that do for us? It allows us to go through and pull out data that we’d want to analyze. Granted, the site as it exists right now isn’t all that spectacular, but it does give us a basis for how we can construct some simple greps. 

For those not familiar with this tool, "grep" is an old UNIX standby. The term comes from the syntax of the "ed" editor, where the command used was g/re/p (or "globally search for a regular expression and print it to stdout"). Those of you with Windows machines can download Grep for Windows at http://gnuwin32.sourceforge.net/packages/grep.htm, or you can find a variety of fun and interesting versions. For me, since my system is in a virtual environment, I'm just going to save the files to my shared folder space and play with grep on my Mac :).

The main benefit to using grep is to look for things that show up in your pages that you may find interesting, or things that might be errors. Searching for basic strings across the dumped files can show a lot of interesting details in the content of the pages. For a quick set of examples, I recommend poking around on this page for 15 examples of ways you can use grep to get interesting data.

Once you find a few greps that you find useful, it's a good idea to save those in a file so that you can run them over and over again as you add content to the site and get more information to mine from your site.  
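
Since I'm doing my greps on the Mac side anyway, here's one way to wire the whole loop together in a small Python script instead of a pile of shell one-liners; lynx still does the dumping, and the patterns are just placeholder examples of "mistakes you commonly make":

    # dump_and_grep.py -- a minimal sketch; assumes lynx is on the PATH
    import re
    import subprocess

    pages = [
        "http://127.0.0.1/web/orchestra/index.php",  # add the rest of your page URLs here
    ]

    # placeholder patterns; swap in the mistakes you actually tend to make
    patterns = [r"Lorem ipsum", r"TODO", r"href=\"\""]

    for url in pages:
        text = subprocess.check_output(["lynx", "--dump", url]).decode("utf-8", "replace")
        fname = url.rstrip("/").split("/")[-1] + ".txt"
        with open(fname, "w") as f:   # one flat file per page
            f.write(text)
        for pattern in patterns:
            for lineno, line in enumerate(text.splitlines(), 1):
                if re.search(pattern, line):
                    print("%s:%d: %s" % (fname, lineno, line.strip()))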

This is meant to be a really basic first step in getting into the details of what your pages show and help get you away from using the browser as a main interaction source. Yes, there's a lot that can be done just with the files and the content that is in them. How you choose to look at them and what interesting details they show will be my focus for next week.

Wednesday, April 23, 2014

A Leaner and Cleaner Codecademy

A couple of years ago, I posted that I was excited to see the initiative that would become Codecademy get off the ground. At the time, it was limited in what it offered. It featured a course on JavaScript and some other small project ideas, and after a little poking around, I went on to other things. A year later I came back and saw that there was some new material, this time on Ruby and Python. A little more poking, and then I went on to do other things.

I made a commitment to roll through Noah Sussman's "ways to become a more technical tester", which I follow up on each Friday in my TECHNICAL TESTER FRIDAY posts. In that process, I decided it would be good to have a place that novice testers could go to learn some fundamentals about web programming. With that, I decided to give Codecademy another look, and I'm glad that I did.

For starters, Codecademy has refreshed everything on the site. They talk about it at length in "Codecademy Reimagined", and I for one am impressed with the level of depth they went into to describe the changes.

They've opened up a number of courses and updated several of their older offerings. The original JavaScript track has been deprecated (but it is still there if you want to work through it), and a new JavaScript track has been put in its place. The site has been augmented with a jQuery track, a freshened HTML/CSS track, and updates to the Ruby, Python, and PHP tracks as well.

In addition, there are several small project areas where users can practice and make "Codebits" to show what they have learned. Some of the Codebits are already assembled (examples include animating your name, making a solar system model, and a simple web site template), as well as open-format Codebits that users can share. Additionally, there are a variety of projects ranging from novice to intermediate and advanced levels so that you can practice what you are learning.

Another cool section is the API track. Currently, there are 29 APIs listed that users can experiment with and build applications against. The offerings range from YouTube to Twitter to Evernote, and each listing also shows the languages best suited to using that particular API (JavaScript, Ruby and Python).

So how's the actual learning process? It's pretty solid, to tell the truth. Each track has a variety of initiatives, and a range of lessons and small projects interspersed throughout to keep the participant's attention. The editor can be finicky at times, but usually a page refresh will solve most of the odd problems. One of the nice attributes of having an account and working through the exercises is that your progress is saved. All of the steps from the first lesson to the last are recorded as part of your progress. That means you can go back and see your "cleared" examples and exercises.

Additionally, there are Q&A forums associated with each project, and so far, even when I've been stuck in some places, I've been able to find answers in each of the forums. Participants put time in to answer questions and debate the approaches, and make clear where there is a code misunderstanding or an issue with Codecademy itself (and often, they offer workarounds and report updates that fix those issues). Definitely a great resource. If I have to be nit-picky, it's that many of the Q&A forum answers are jumbled together. Though the interface allows you to filter on the particular module and section by name, number and description, it would be really helpful to have a header for each question posted that says which module the question refers to. Many do this when they write their reply titles, but having it be a prepended field that's automatically entered would be sweet :).

Overall, I think Codecademy has come a long way from when I first took a look at it about two and a half years ago. They have put a lot of effort into the site and their updates, and it shows. If you are already playing around at Codecademy, you already know everything I've written here. If you haven't been there in a while, I recommend a return trip. It's really become a nice learning hub. If you have never been there, and are someone who wants to learn how to program front end and back end web apps, and you like the idea of FREE, then seriously, go check the site out and get into a track that interests you. I'd suggest HTML/CSS, JavaScript, and jQuery first. From there, if you'd like to focus just on making web sites with little in the way of entry criteria, check out the PHP track; otherwise, branch out into the Ruby or Python tracks, and work through the site at your own pace. It's not going to be the be-all and end-all destination for learning about programming, but seriously, you can make a pretty big dent with what you can learn here.

Tuesday, April 22, 2014

Selenium SF Live: An Evening With Dave Haeffner

It's been about three years since I first met Dave. He was, at the time I met him, working with the Motley Fool, and was one of the people I connected with and recorded some fun (albeit rather noisy) audio with for what I had hoped would be a podcast from the Selenium Conference in 2011. Alas, the audio wasn't as usable as I had hoped for a releasable podcast, but I remembered the conversation well, specifically Dave's goal to see if he could, at some point, find a way to make Selenium less cryptic and more sturdy than what had been presented before.

Three years later, Dave stands as the author of "The Selenium Guidebook", and tonight a couple of different Meetup groups (the San Francisco Selenium Users Group and the San Francisco Automated Testers) are sharing the opportunity to bring Dave in to speak. I've been a subscriber to Dave's Elemental Selenium newsletter for the past couple of years, and I've enjoyed seeing how he can break down the issues and discuss them in a way that is not too overbearingly technical, giving the reader a new idea and approach they might not have considered before. I'm looking forward to seeing where Dave's head is at now on these topics.

Here's some details about Dave for those of you who are not familiar with him:

Dave Haeffner is the author of Elemental Selenium (a free, once-weekly Selenium tip newsletter that is read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing, including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.


This will be a live blog of Dave’s talk, so as always, I ask your indulgence with what gets posted between the time I start this and the time I finish, and then allow me a little time to clean up and organize the thoughts after a little time and space. If you like your information raw and unfiltered, well, you’ll be in luck. If not, I suggest waiting until tomorrow ;).

---

The ultimate goal, according to Dave, is to make tests that are business valuable, and then do what you can to package those tests in an automated framework. This then frees the tester to look for more business valuable tests with their own eyes and senses. Rinse, lather, repeat.

The first and most important thing to focus on is to define a proper testing strategy, and after that's been defined, consider the programming language that it will be written in. It may or may not make sense to use the same language as the app, but who will own the tests? Who will own the framework? If it's the programmers, sure, use the same language. If the testers will own it, then it may make sense to pick a language the test team is comfortable with, even if it isn't the same as the programming team's choice.

Writing tests is important, but even more important is writing tests well. Atomic, autonomous tests are much better than long, meandering tests that cross states and boundaries (they have their uses, but generally, they are harder to maintain). Make your tests descriptive, and make your tests in small batches. If you're not using source control, start NOW!!!

Selenium fundamentals help with a number of things. One of the best is that it mimics user actions, and does so with just a few common actions. Using locators, it can find the items that it needs and confirm their presence, or determine what to do next based on their existence/non-existence. Class and ID are the most helpful locators long term. CSS and XPath may be needed from time to time, but if that's more the rule than the exception, perhaps a chat with the programming team is in order ;). Dave also makes the case that, at least as of today, the CSS vs. XPath debate has effectively evened out. Which approach you use depends more on how the page is set up and laid out than on one approach being inherently better than the other.

Get in the habit of using tools like FirePath or FireFinder to help you visualize where your locators are, as well as to look at the ways you can interact with the locators on the page (click, clear, send_keys, etc.). Additionally, we'd want to create our tests in a manner that will perform the steps we care about, and just those steps, where possible. If we want to test a login script, rather than make a big monolithic test that looks at a bunch of login attempts, make atomic and unique tests for each potential test case. Make the test fail in one of its steps, as well as make sure it passes. Using a Page Object approach can help minimize the maintenance needed when pages are changed. Instead of having to change multiple tests, focus on taking the most critical pieces needed, and minimize where those items are repeated.

Page Object models allow the user to tie Selenium commands to the page objects, but even there, there are a number of places where Selenium can cause issues (the move from Selenium RC to Selenium WebDriver made some fundamental changes in how interactions are handled). By defining a "base page object" hierarchy, we add a layer of abstraction so that changes to the Selenium driver minimize the need to change multiple page object files.
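
To make that concrete, here's a bare-bones version of the base page object idea in Python; the page and its locators are hypothetical examples of mine, not from Dave's slides:

    # a minimal sketch of a base page object plus one concrete page
    from selenium.webdriver.common.by import By


    class BasePage(object):
        """Wraps raw Selenium calls so driver API changes only touch this class."""

        def __init__(self, driver):
            self.driver = driver

        def _find(self, locator):
            return self.driver.find_element(*locator)

        def _click(self, locator):
            self._find(locator).click()

        def _type(self, locator, text):
            element = self._find(locator)
            element.clear()
            element.send_keys(text)


    class LoginPage(BasePage):
        USERNAME = (By.ID, "username")                       # hypothetical locators
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

        def log_in(self, username, password):
            self._type(self.USERNAME, username)
            self._type(self.PASSWORD, password)
            self._click(self.SUBMIT)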

Explicit waits help with time-bound problems like page loading or network latency. Defining a "wait for" option is more helpful, as well as more efficient: instead of hard-coding a 10-second delay, the wait-for sets a maximum time limit, but moves on as soon as the item needed actually appears.
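
In Selenium's Python bindings, that "wait for" idea looks something like this (the locator is a made-up example):

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait


    def wait_for(driver, locator, timeout=10):
        # returns as soon as the element is visible; raises TimeoutException after `timeout` seconds
        return WebDriverWait(driver, timeout).until(
            EC.visibility_of_element_located(locator))

    # usage (assumes `driver` is an open WebDriver session):
    # message = wait_for(driver, (By.ID, "success-message"))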

If you want to build your own framework, remember the following to help make your framework less brittle and more robust (a small sketch of a couple of these ideas follows the list):
  • Central setup and teardown
  • Central folder structure
  • Well-defined config files
  • Tagging (test packs, subsets of tests: wip, critical, component name, slow tests, story groupings)
  • A reporting mechanism (or borrow one that works for you; have it be human readable and summable, as well as "robot ready" so that it can be crunched and aggregated/analyzed)
  • Wrap it all up so that it can be plugged into a CI server.
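
Here's one way a couple of those items (central config and tagging) might look in a Python/pytest flavored framework; the marker names and environment variable are hypothetical:

    # conftest.py (sketch)
    import os
    import pytest

    # central config: one place to decide which environment the suite points at
    BASE_URL = os.environ.get("BASE_URL", "http://localhost:8000")

    def pytest_configure(config):
        # register tags so `pytest -m critical` or `pytest -m "not slow"` just works
        for name in ("wip", "critical", "slow", "search"):
            config.addinivalue_line("markers", "%s: %s test pack" % (name, name))

    # in a test file:
    # @pytest.mark.critical
    # def test_login_succeeds():
    #     ...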

Scaling our efforts should be a long term goal, and there are a variety of ways that we can do that. Cloud execution has become a very popular method. It's great for parallelization of tests and running large test runs in a short period of time if that is a primary goal. One definitely valuable recommendation: enforce random execution of tests. By doing so, we can weed out hidden dependencies. Find errors early, and often :).

Another idea is "code promotion". Commit code, check to see if integration passes. If so, deploy to an automation server. If that works, deploy to where people can actually interact with the code. At each stage, if it breaks down, fix there and test again before allowing to move forward (Jenkins does this quite well, I might add ;) ). Additionally, have a "systems check" in place, so that we can minimize false positives (as well as near misses).

Great talk, glad to see you again, Dave. Well worth the trip. Look up Dave on Twitter at @TourDeDave and get into the loop for his newsletter, his book, and any of the other areas that Dave calls home. 

Saturday, April 19, 2014

TECHNICAL TESTER, err, Saturday: Pain Fog and Objective Completion

Yes, I know, I'm a day late with this. Actually, I'm closer to a week and a day late with this, but reality decided to remind me that I'm not 21 any longer.

Last week, April 9th, as I was getting off the train, I stood up and reached over to grab my bag. The "twinge" I felt above my hip on my right side was a telltale reminder. I have sciatica, and if I feel that twinge, I am not going to be in for a fun week or two. Sure enough, my premonition became reality. Within 48 hours, I was flat on my back, with little ability to move, and the very act of doing anything (including sleeping) became a monumental chore. To that end, my progress on anything that was not "mission critical" pretty much stopped. There was no update last Friday because there was nothing to report. I spent most of the last week with limited movement, a back brace, copious amounts of Ibuprofen, and typing only when I had to. I'm happy to report I'm getting much better, but sitting for long stretches to code or write was still painful, though less so each day.

Since I'm greatly desirous to move forward, I decided to make a push in the latter part of this week to clear the Codecademy site's three courses related to web development: PHP, HTML/CSS and JavaScript. Late last night, I finished up the course for JavaScript. Yay me :)!!!

As an overall course and level of coverage, I have to give Codecademy credit: they have put together a platform that is actually pretty good for a self-directed learner. It's not perfect, by any means, and the editor can be finicky at times, but it's flexible enough to allow for a lot of answers that would qualify as correct, so you don't get frustrated if you don't pick their exact way of doing something.

Many of the hints offered for each of the lessons are also helpful and don't spoon-feed too much to the participant, making you actually stretch and think. While I've known about and tinkered with JavaScript off and on for years, I will honestly say I've learned quite a bit from these courses. I would recommend these three courses (HTML/CSS, PHP and JavaScript) as a very good no-cost first stop to learn about these topics. Each does a good job of explaining its topics and concepts individually.

If there is any criticism, it's that there is little in the course examples that integrates the ideas (at least so far). There is a course on jQuery, and I anticipate that it will probably have more to do with actual web component interaction and integration, so that's my next goal to complete. After that, I plan to go back and complete the Ruby and Python modules, and explore the API module as well.

For now, consider this a modest victory dance, or in this case, a slow moving fist pump. I may need a week or two before I can actually dance ;). Also, next week I'll have some meat to add to this, since I'm going to start covering some command-line level tools to play with and interact with, and those are a lot more fun to write about!

Wednesday, April 9, 2014

Become a "Coyote" at CAST 2014

Now that the full program has been announced, and my talk is posted and described, I can now say, with a certainty, what I'll be talking about at CAST 2014 this August in New York City.

Actually, I need to qualify that. It's not what I'm going to talk about, it's what "we" are going to talk about.

Harrison Lovell is an up and coming tester with copious amounts of wit, humor and energy. Seriously, he gives me a run for my money in the energy department. I met Harrison through the PerScholas mentorship program, and we have been communicating and working together regularly on a number of initiatives since we first met in September of 2013. The results of those interactions, experiments, and a variety of hits (and yes, some misses here and there), are the core of the talk we will be doing together.

Here's the basics from the sched.org site:

"Coyote Teaching: A new take on the art of mentorship"

Too often, new software testers are dropped into the testing world with little idea as to what to do, how to do it, and where to get help if they need it. Mentors are valuable, but too often, mentors try to shoe-horn these new testers into their way of seeing the world. Often, the result is frustration on both sides.

“Coyote Teaching” emphasizes answering questions with questions, using the environment as examples, and allowing those being mentored the chance to create their own unique learning experience. Coyote Teaching lets new testers learn about the product, testing, the world in which their product works, and the contexts in which those efforts matter.

We will demonstrate the Coyote Teaching approach. Through examples from our own mentoring relationship, we show ways in which both mentors (and those being mentored) can benefit from this arrangement.

“When raised by a coyote, one becomes a coyote”.


Speakers

Michael Larsen
Senior Quality Assurance Engineer, Socialtext
Michael Larsen is a Senior Tester located in San Francisco, California. Over the past seventeen years, he has been involved in software testing for products ranging from network routers and switches, virtual machines, capacitance touch devices, and video games to distributed database applications that service the legal and entertainment industries.

Harrison C. Lovell
Associate Engineer, QA, Virtusa
Harrison C. Lovell is an Associate Engineer at Virtusa’s Albany office. He is a proud alumnus from Per Scholas’ ‘IT-Ready Training’ and STeP (Software Testing education Program) courses. For the past year, he has thrown himself into various environments dealing with testing, networking and business practices with a passion for obtaining information and experience.


Yes, I think this is going to be an amazing talk. Of course, I would say that, because I'm part of the duo giving it, but really, I think we have something unique and interesting to share, and perhaps a few interesting tricks that might help you if you are looking to be a mentor to others, or if you are one who wants to be mentored. One thing I can guarantee, considering the combinations of personalities that Harrison and I will bring to the talk... you will not be bored ;)

Friday, April 4, 2014

Technical Tester Friday: Ladies and Gentleman, JavaScript has Entered the Building

There's nothing like the mild terror one feels when they get back from several days away and then think "aw man, I have to post something today!" Too many of my posts have stretched over two weeks, and while I had perfectly valid reasons for that, I said that I'd post every Friday, whether I had a lot to talk about or just a little. I've decided the volume of the delivery is less important than the regularity and reliability of having an entry every Friday, and that's what's driving me today.

With the ALM forum and all that surrounded it, as well as getting ready for and presenting my talk, I really didn't have a whole lot of time to push my way into learning more about JavaScript and implementing as much of it as I wanted to. I started the Codecademy JavaScript module and worked through several of the initial entries. When I realized that I wasn't going to be able to have something to show by the end of this week... okay, I cheated. Well, I didn't really cheat. I just went out on the web, looked at some sample JavaScript projects, and tried to see if I could make sense of what they were doing.

The good news: I found a simple JavaScript project that I could apply to the site's navigation bar. If you remember from last week's example, the navigation bar was really just a couple of links horizontally displayed, with brackets on the ends to simulate "buttons". This time, we made real buttons with a little interactivity to them. They give a visual indication of which page has been selected (the button is a little larger than the others):






and here's the CSS and JavaScript code that makes the site look better than 1995 ;).




I should stop here and say, again, there are a lot of neat little distractions you can get into with JavaScript. There's potentially a slight barrier to entry for a brand-new web developer. HTML is pretty easy. CSS has some rules, but once you learn them, they don't feel that different compared to native HTML. JavaScript is very similar to PHP, in that you can learn the basics pretty quickly. How to actually use the basics effectively, and in a meaningful way on your pages... that's a bit of an art form, and it's one of the things you're going to have to practice doing. Start small and work out from there.

So there you have it. Again, because of being out of town and completely consumed by the ALM Forum conference, I did not get a chance to do as much JavaScript hacking as I wanted to, but that gives me a chance to go a little deeper next week. Perhaps I can pop a little bit of eye candy into the site, so we make it a little more interesting. As always, crawl before you walk, walk before you run, and maybe run before you get on a bicycle or drive a car. Little steps get you in, and I think the project will ultimately become a little more interesting as the defined, repeatable things get more and more canned so I can focus on other things :).

Thursday, April 3, 2014

Testing the Limits at #ALMForum: Day Three

Wow, what a week this has been. We're now on day three, the last day, and I'm up in an hour! I'm excited, a little frazzled, but I think we're going to do well. I'm also excited that the four speakers in the breakout today are all good friends; Curtis Stuehrenberg, Seth Eliot and Mark Tomlinson are gonna' help me close out this conference, and we look forward to chatting with as many people as possible who want to look at ways to change the face and state of software testing. If you are here at ALM Forum, come join us. If you are not able to be, please read on here and take in as much as you can from my notes and observations.

-----

Transforming Software Development in a World of Services with Sam Guckenheimer is the first session, and we are starting out with a thought experiment around Airbnb (the online service to rent rooms, houses, etc. in different cities). A boat on Puget Sound is available, so a company can host all of their team members on the boat. What will the experience be? Will it be a fun stay? Will it be too cramped? We don't know, but one thing's for sure: it will be open, it will be public, and good or bad, if people want to talk about it, they will.

This makes for an interesting comparison to Agile development, and the way that Agile has shaken out. What had been intended as a relatively private, internal housekeeping mode has become a more public viewing. We are social, we are open, we use systems that are often out of our control in the 100% sense of the word. A lot of our practices and actions are not quiet and hidden; they are visible to all who would care to see them. It's a little daunting, but it's also tremendously liberating.

This talk is looking at a Microsoft ideal of "cloud cadence". Customers want regular improvements, we want to maximize the value we provide to our customers, and we know that their feedback is not just for developers; it's seen by everyone. Get it right, and we have app store five star reviews. Get it wrong, and we can have considerably lower reviews (and don't for a second think those reviews don't matter; it can be the difference between adoption and being totally forsaken).

The DevOps life cycle comes together with three aspects: we have development, we have production, and in between we have the collaboration piece. What's the most important element there? Well, without good development, we have a product that is sub-par. With bad deployment, we might have a great product, but it won't really work the way we intend it to. The middle piece is the critical aspect, and that collaboration element is really difficult to pin down. It's not a simple prescription or a set checklist; each organization and project will be different, and many times the underpinnings will change (from our servers to the cloud, from a dedicated and closed application to a socially aware application). Sometimes the changes are made deliberately, sometimes the changes are made a little more forcefully. Either way, none of it holds together without a sense of shared purpose and collaboration between the development and production groups, including the tooling necessary to accomplish the goals.

The ability to do all of these things in the Visual Studio team is the core of Sam's talk, along with the interactions with their clients and the variety of changes that drive many of their decisions. They learn from their customers and change direction. They focus on a human-to-human feedback model (which may sound a little unusual for a giant company like Microsoft, but Sam makes a convincing case :) ).


-----

So this is my talk. No, I can't talk about my talk while I'm giving it, so this is a little canned ;). My topic is "The New Testers: Critical Skills and Capabilities to Deliver Quality at Speed". If I were to be a little more literal with my title, I'd call it "What you want the new testers that you hire to know and want to be so that they can be genuinely effective for you and your team... oh, and they may not be the obvious areas you think they need to be".

Software development, and software testing, is undergoing a radical change. We've embraced the idea of changes in development and delivery, but we tend to still look at old school "best practices" in software testing. We're not still testing the software the previous generation wrote. Development has changed, and testing is changing; it's still as relevant as it was before, but we need to approach it differently than we have.

I’m involved in a variety of initiatives that are specifically geared towards teaching software testing to a new generation of testers (and hey, current testers may find the ideas useful, too).

Programs like SummerQAmp, PerScholas, Weekend Testing, the Miagi-do School of Software Testing, and the BBST series of classes are all designed to help software testers not just develop ideas, but real world skills that can help them do their jobs effectively. The community that surrounds testing (in the Twitter, G+, and special forum space) are all doing amazing work to move testing forward. 


So what’s wrong with the old model? We still hear about testing teams, even in so called Agile organizations, that are still doing Heavy Process, Heavy Scripting series of tests. It’s like the development team is Agile, but the test team is expected to still be a waterfall team. Automation makes a lot of promises, and don’t get me wrong, I am pro automation for many things. I use automation. I write automation. I prefer the term Computer Aided Testing, but Automation will suffice. It’s a tool, but it’s not the only tool, and it has been oversold on what it can accomplish. It’s great for repetitive tasks. It’s great for configuration and iteration stepping. It’s lousy at making informed decisions. Though it’s not been a problem I’ve personally dealt with or had to experience, I know that “certification” has been sold as a way to “pre-qualify” testers. As a practical outcome, I think we have failed here, because most of the certifications offered are heavy on passing a test, and light on demonstration of real world skills and the effectiveness thereof.


I believe the New Testers need to focus on a new toolkit and a new attitude. It's not really new; in fact, in many ways, it's ancient, but it's been woefully underutilized. We need testers who are sapient (stealing that from James Bach), basically meaning we need testers who are actively and critically thinking about what they are doing and observing. Testers need to do more than find bugs; they need to sell those bugs. Really, what's more important, lots of bugs, or the championing of important bugs that actually get fixed?

Testers need to return to and have a solid understanding of both the Scientific and Socratic methods. I believe that New Testers will be less button pushers and more scientists, philosophers and skeptics. These are not just testing traits; these need to be embraced by everyone in development. New Testers don't want to prove the software works. They want to find how it is broken. They want badly to lose the stigma of being the bug shield. They are much better utilized as "beat reporters" sharing a clear story of your product. A thought experiment from Elisabeth Hendrickson that I personally love is: "What is the most terrifying headline about your company you could imagine seeing in the paper? Wouldn't you want your testers to not only find out that terrifying headline, but inform you so that you could prevent it?"

OK, that's great. So where can I find these New Testers? You can find them in Computer Science departments at universities. Yes, I'm daring to say it. Most testers have historically fallen into the job, but I am seeing people who are now self-selecting to be software testers, and it's *WONDERFUL*. They are not also-ran programmers, or people who couldn't hack programming. Some are great programmers, but they have decided that there are other challenges they'd like to deal with rather than stringing code together. And that's *ALSO* great. The point is, they are not considering testing as a consolation prize; they are selecting testing on its own merits, and we should recruit them with the same philosophy. Where else can we find great up and coming testers (and, to be fair, current great testers)? Check out people with degrees in Humanities, or Journalism, or Psychology. Look for actual scientists who might be looking for a change of pace. Do you have really good Customer Service Representatives? It's a good bet you have some fantastic testers in that group.

Programs like SummerQAmp, PerScholas, Weekend Testing, BBST Courses, a thriving ecosystem of Bloggers, Newsgroups and Online Magazines and Twitter (yes, Twitter :) ) are at the vanguard of bringing this new paradigm of testing to the fore. Each of these, in their sphere, is looking to help bring real, tangible testing skills to their participants, and give them a chance to show what they can do and improve their craft. Weekend Testing is not just a movement, it’s also a portable model that anyone can use. All you need is Skype, a topic of discussion, a product or project, a mission and some charters, and two hours to interact, instruct and facilitate. If you want to see some amazing testing insights, I encourage you to review just about any Weekend Testing transcript. 

Testers are not a single group; they have many interests and their own niches. Some testers will be good at some, better at others, and probably not stellar in all, but with a broad team that recognizes this, you may be surprised at the powerhouse you can develop when you get the Explorers, the Performance Tweakers, the Toolsmiths (automation, CI, deployment tools, etc.), the Evil Masterminds (Security), the Humanists (human factors, usability), and the Storytellers together. Just don't make the mistake of thinking you can get this all in one person. You may get attributes of all in one tester, but none of us can be experts at all of these, or should I say, very few of us can be (I'm certainly not one of them).

In all, the new testers will be focused on:

  • less scripting, more active thinking
  • less checking, more real testing
  • less blind faith, more scientific skepticism
  • creative, inventive, intuitive, mindful

In short, the future is now, and I can introduce you to hundreds of them ;). Better yet, why not come join us and see for yourself?


  • SummerQAmp: hire an intern
  • PerScholas: have a chat with recent STeP graduates and their mentors
  • Weekend Testing: Come join us for a session or two and see the magic happen
  • Miagi-do: Do a web search for the term “Miagi-do School of Software Testing”. Or better yet, just ask me ;). 

-----

Curtis Stuehrenberg is talking about how to "ACCellerate Your Agile Test Planning". He decided to chuck the PowerPoint entirely and give a crash course in Agile testing on a live product... specifically, his product (well, Climate Corp's mobile app, to be specific). His point was to ask "what if we have to test a product in two weeks? How about one week? How about three days? What are you going to do?"

Rather than talk it, we all participated in an active testing session, downloading the app to our mobile devices (iPhone and Android only, sorry Windows Phone users :( ). By walking through the steps and the test areas, and using an idea from James Whittaker and Google called the ACC model, we all, in real time, put together sections of risk and areas we would want to make sure we tested. In many ways, ACC is a variation on a theme of Session Based Test Management (SBTM). It informs our tests, we act on the guidance, and we pivot and adapt based on what we learn, and we do it quickly.
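
For my own notes, ACC breaks the product down into Attributes (adjectives the product should embody), Components (the nouns, the big parts), and Capabilities (the verbs tying the two together), which then become risk-ranked charters. A rough sketch in data form; the entries below are invented stand-ins, not what we actually built in the session:

    # a toy ACC grid; attributes, components and capabilities are made up
    attributes = ["Fast", "Accurate", "Secure"]
    components = ["Login", "Field Map", "Weather Feed"]

    capabilities = {
        ("Login", "Secure"): "locks the account after repeated bad passwords",
        ("Field Map", "Fast"): "renders a farm boundary in under two seconds",
        ("Weather Feed", "Accurate"): "matches the provider's published forecast",
    }

    # each capability becomes a candidate charter for a short, risk-driven session
    for (component, attribute), capability in capabilities.items():
        print("%s x %s: %s" % (component, attribute, capability))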

Much of the interaction was just things we did in real time, and for my money, this was a brilliant way to emphasize this approach. Instead of just talking about it, we all did it. Even if the idea of a formal test plan is not something you have to deal with, give this approach a try. I know I'm going to play with this when I get back home :).

-----

Now it's time for Seth Eliot and "Your Path to Data Driven Quality", a roadmap for how to use the data that you are gathering to help guide you to your ultimate destination. Seth wants to make the point that testing is measurement, and you can't measure if you don't have data (well, you can, but it won't really be worth much). Seth asks if we are HiPPO driven (meaning our strategy is defined by the "Highest Paid Person's Opinion") or whether we are making decisions based on hard data. Engineering data can help a little bit (test results, bug counts, pass/fail rates). It can give us a picture, but not a complete one (in fact, not even close to a complete one). There's a lot of stuff we are leaving on the table. Seth says that leveraging production data (or "near production data") gives us a richer and more dynamic data set. Testers try to be creative, but we can't come close to the wacko randomness of the real world users that interact with our product.

First step: Determine your questions. Use Goal Question Metrics. Start at the beginning and see what you ultimately want to do. Don't just get data and look for answers. Your data will taint the questions you ask if you don't ask the questions first. You may develop a confirmation bias if you look at data that may seem to point to a question you haven't asked. Instead, the data may give you a correlation to something, but it may not actually tell you anything important. Starting with the question helps to de-bias your expectations, and then it gives you guidance as to what the data actually tells you.

Then: Design for production-data quality. There are two types of data we can access: active and passive. Active data could be test cases or synthetic data from a simulated user. Passive data comes from real world data and real user interactions. Synthetic data is safer, but it's by definition incomplete. Passive data is more complete, but there's a danger to using it (compromising identification data, etc.). Staging the data acquisition lets us start with synthetic data (reminds me of my "Attack on Titan" account group that I have lovingly put together when I test Socialtext... yes, I have one. Don't judge me ;) ), then move to copying my actual account and sharing on our production site (much richer data, but it needs to be scrubbed of anything that could compromise individuals' privacy... which in turn gets us back to synthetic data of sorts, but a richer set). Bulk up and repeat. Over time, we can go from a small set of sample data to a much larger and beefier data set, with lots more interesting data points.

Then: Select data sources. There are a number of ways to gather and accumulate data. We can export from user accounts, or we can actively aggregate user data and collect those details (reminds me of the days of NetFlow FlowCollection at Cisco). We need to be clear as to what we are gathering and the data handling privacy that goes with it. Anonymous data is typically safe; sensitive personally identifiable info requires protocols to gather, most likely scrub, or not touch with a ten foot pole. Will we be using infrastructure data, app data, usage, account details, etc.? Each area has its unique challenges. Plan accordingly.

Then: Use the right data tools. What are you going to use to store this data? Databases are of course common, but for big data apps we need something a little more robust (Hadoop is hip in this area). Where do you store a Hadoop instance? Split it up into smaller chunks (note, splitting it makes it vulnerable, so we need to replicate it. Wow, big data gets bigger :) ). Using map-reduce tools, we can crunch down to a smaller data set for analysis purposes. I'm going to take Seth's word for it, as Hadoop is not one of my strong suits, but I appreciated the 60 second guided tour :). Regardless of the data collection and storage, ultimately that data needs to be viewed, monitored, aggregated and analyzed. The tools that do that are wide and varied, but the goal is to drill down to the data that matters to you and have the ability to interpret what you are seeing.

Then: Get answers to your questions. Ultimately, we hope that we are able to get answers, based on the real data we have gathered, that will help us either support or dispute our hypothesis (back to the scientific method; testing is asking questions and then, based on the answers we receive, considering and proposing more interesting questions). Does our data show us interesting points to focus our attention? Do we know a bit more about user sentiment? Have we figured out where our peak traffic times are? If we have asked these questions, gathered data that is appropriate for those questions, and focused on aggregating and analyzing the appropriate data, we should be able to say "yes, we have support for our hypothesis" or "no, this data refutes our hypothesis". Of course, that leads to even more questions, which means we go to...

Lather. Rinse. Repeat.

Hmmm, Mark Tomlinson just passed me a note with a statement that says "Computer Aided Exploratory Testing"? Hadn't considered it quite that way, but yes, this certainly fits the description. An intriguing prospect, and one I need to play with a bit more :).

-----

Lightning talks! Woo!!! We have four presenters looking to rifle through some quick talks.

Mark Prichard is discussing "Complete Continuous Integration and Testing for Mobile and Web Applications". Mark is with Cloudbees, and he's explaining how they do exactly what the title describes. Some interesting ideas surrounding how to use Jenkins and other tools to make it possible to build multiple releases and leverage a variety of common tools so as to not have to replicate everything for each environment. Leverage the cloud and Platform-as-a-Service for Continuous Delivery. Key takeaway... "ALM in the cloud will become the rule, not the exception". Quote attributed to Kurt Bittner.

--

Mike Ostenberg from SOASTA is next, and he's talking about "Performance Testing in Production, and what you'll find there". Begs the question... *WHY* do we want to do performance testing in production (isn't that what we call a "customer freak out"? Well, yeah, but that's an after effect, and we really want to not go there ;) ). Real systems, real load, real profiling. There are ways we can simulate load on a test environment, but it's not really going to match what happens in the real space. Additionally, we want to do our load testing earlier than we traditionally do it. At the end of the cycle, we're a little too far gone to actually pivot based on what we learn.

Load testing in production, Mike points out, can be done in stages and on different levels. Just as we use unit tests for components, integration tests for bigger systems, and feature/acceptance tests to tie it all together, we can deconstruct load tests to match a similar paradigm. Earlier load tests deal with errors, page loads, garbage collection, data management, etc. Regardless of the stage, there are some critical things to look at.

Bandwidth is #1, can everyone reach what they need? Load balancing, or making sure everyone pulls their weight, is also high priority. Application issues; there's no such thing as perfect code. Earlier tests can shake out the system to help show inefficient code, sync issues, etc. Database performance fits in application issues, but it's a special set of test cases. The database, as Mike points out, is the core of performance. Locking and contention, index issues, memory management, connection management, etc. all come into play. Architecture is imperative. Think of matching the right engine to the appropriate car. Connectivity comes into play as well. Latency, lack of redundancy, firewall capacity, DNS, etc. Configuration means we need to get custom and actually see if we mean it. Shared Environments... watch out for those noisy neighbors :). Random stuff comes into play when things are shared in the real world. Pay attention to what they can do for you (or to you ;) ).

I like this staggered approach, it makes the idea of "testing in production" not seem so overwhelming.

--

Now on deck is Dori Exterman, and he's talking about "Reducing the Build-Test-Deploy Cycle from Hours to Minutes at Cellebrite". Hmmm, color me mildly skeptical, but OK, tell me more :). I'm very familiar with the idea of serial build-test-deploy, and I know that that does not bode well. Multi-core systems can certainly help with this, and leveraging multi-core environments can allow us to do a much tighter build-test-deploy pipeline. Parallel processing speeds things up, but there's a system limit, and those system limits are also very costly at their higher end.

So what's the option when we max out the cores on a single system? Seems that going parallel to more servers would make sense. Rather than one machine with 32 cores, how about 8 machines with four cores? Same number of cores, maybe similar throughput gains (and potentially better, since system resources are shared over multiple machines). This approach is referred to as a CI cluster farm. Cool, but we're still in a similar ballpark. Can we do better? Dori says yes, and his answer is to use distributed computing within your own network of machines. If I'm hearing this correctly, it's kind of like the idea of letting your machine be used for "protein folding" experiments while your machine is in more idle states (anyone else remember signing up to do stuff like that? :) ). I'm not sure that's exactly what Dori means, but it seems this could be really viable, and we already have an example of that happening (i.e. "signing up for protein folding").

How wild would it be to be able to wire up your entire network, everyone's machines, so that they can help speed up the build process? It's a fascinating model. I'd be curious to see if this really comes to fruition.

--

We had another lightning talk added that came from a Birds of a Feather session about CI/CD, so this is a bit of a surprise. The idea was to see how we could leverage pipelines (mini-builds that run in sequence and individually). Mini-builds also help us build individual components, with a goal to integrate the elements later on. Often, all we want is a yes/no to see if the change is good or not (gated check-ins).

This blends into Dori's talk just given on distributed computing and utilizing down times for making an almost unlimitedly parallel build engine. So this is interesting, but what's management going to say about all of this? Well, what is it costing us not to do this? Are we losing time and in effect losing money in the process? Will this help us fix some of our technical debt? If so, it may well be worth considering. If it adds more technical debt, less likely to sell that option.

Another point is that good CI infrastructure will bubble up issues in design and architecture of both the process and the application. Innovation and motivation will potentially increase when changes can be made more frequently, and subsequently, more atomically.

By using information radiators, we can get a clearer sense as to who did what to cause the build to fail. Gadgets (lights, sounds, sensory input) can help make it more apparent and in real time. Not sure if this would be a major plus, but I'm not necessarily the best judge of what developers consider to be fun ;).


-----

The final test track talk, the anchor session, goes to Mark Tomlinson, as he discusses "Roles and Revelations: Embracing and Evolving our Conceptions of Testing". With a title like that, let's just say "you had me at 'hello'" ;).

Mark is a fun guy to listen to (check out his podcast "PerfBytes" to get a feel), and thus, it's fun to hear him do a more narrative talk as opposed to a techy talk. We start out with the idea of what testing is, at least how we look at it historically. We find bugs, we see that we can validate to a spec, we try to reduce costs, and we aim to mitigate risks. Overall, I think if you gave that list to any lay person and said "that's what testing is", they'd probably have little difficulty understanding that. Those definitions are valid, but it's also somewhat limiting. We've seen some interesting milestones over the past 50 years. Debugging, Demonstration, Destruction, Evaluation and Prevention can all be seen as "eras of testing". Mark points out that there are 10 different schools of testing (Domain, Stress, Specification, Risk, Random/Statistical, Function, Regression, Scenario, User, and Exploratory).

That's all cool... but what if one day everything changed? Well, one could say that in the past 14 years, or since the Agile Manifesto, the Universe did change... to steal a little from James Burke. We are less likely today to have isolated test groups. We have a lot more alphabet soup when it comes to our titles. I've had lots of titles, lots of combinations, but ultimately all of them could be distilled to a "tester" of some flavor. Some teams have no dedicated testers, or just one dedicated tester. Test Driven Development is an unfortunate term choice, in that what is a design process often gets mistaken for "testing" (nope, it's not. It's checking for correctness, but it is not testing). Our time to be interactive and effective is happening earlier, and I love this fact.

Continuous Integration, Continuous Deployment, Continuous Delivery and even Continuous Testing have entered the vernacular. What does this mean? It's all about trying to automate as many of the steps as humanly possible. Build-Check-Deploy-Monitor-Repeat. Conceive of a time and place where we go from end to end without a person involved, just machines. Sounds great, huh? In some ways it's awesome, but there's an unfortunate side effect, in that many processes are billed as testing that are not. Checking is what automation does. It's great for a lot of things, but it can't really think. Testing, real testing, requires thinking and judgment. There's been a devaluing of testing in some organizations, where just doing testing is considered a liability. Unless we are all coding toolsmiths, we are of a lesser order... and that's bunk!!!

Ultimately, testing is a cost... seriously. Testing does not make money. Testing is a cost center. It's an important cost center, but it is a cost. Think of health insurance. It is not an investment. It's a cost you have to pay... but when you crash a car or break a leg, the insurance kicks in, and I'll bet you're happy when you have it (and really frustrated if you don't). That's what testing is. It's insurance. It's a hedge. It's a cost to prevent calamity. With all of the changing going on, we need to be clear about what we are and what we provide.

What we generate, and what real value we provide, is feedback and information. We are not critics. We are not nay-sayers; we are honest (we hope) reporters of the state of reality, or at least as close to it as we can potentially be. The really valuable things that we can provide are not automate-able. Yes, I dared to say that :). Computers can evaluate variable values and they can confirm or deny state changes, but they cannot really think, and they cannot make an informed judgment call. They can only do what we as people tell them to.

Change is constant, and we will see more change as we continue. Testers need to be open to change, and realize that, while there is always value that we provide, the way we provide that value, and the mechanisms and institutions that surround it, will evolve. If we do not evolve with them, we will be left behind.

Mark emphasizes that software testers are "Facilitators of Quality". Testing is not just limited to dedicated testers; it's dispersing. Therefore, we need to emphasize where we can be effective, and that may mean going in totally different directions. Testing provides diversity, if we are willing to have it be a diversifying role. Think of new techniques, expand the way that we can ask questions, learn more about the infrastructure, and figure out ways that we can keep asking questions. The day we stop asking questions is the day testing dies, for real.

Testing can actually accelerate development. I believe this, and I have seen it happen in my own experience. This is where paired developer-tester arrangements can be great. Think of the programmer being the pilot and the tester being the navigator. Yes, if all we ask is "are we there yet?", we don't offer much, but if we watch the terrain, and ask whether some routes we've mapped may be better or worse for the time we want to arrive, now we're adding value, and in some ways we can help them fix issues before they've even been committed. Testers provoke reactions. Not to be jerks, but to get people to think and consider what they really should be doing. Do you think you can't do that? If so, why? Give it a try. You may surprise yourself (and maybe a few programmers) with how much you deliver. In short, be the Devil's Advocate as often as possible, and be prepared to embrace the devils you don't know ;).

Consider that every tester is an Analyst. It may be formal or informal, but we all are, deep down. We can research quality efforts, we can drill down into data and see patterns and trends, we can spot efficiencies we can add to our repertoire, and we can adapt, adapt, adapt!

-----

Sorry for the delay on this last bit, but I had a rather meta post-presentation call with Mark Tomlinson (we did a conference call about how to do podcasts, and in the process, recorded the session... so yeah, we made a podcast about how to do podcasts as an artifact of a meeting about how to do podcasts). Main takeaway: it's fun, but there's more to doing them than many people consider. We just hope we didn't scare everyone off after we were done (LOL!). After that, all of the speakers descended upon Tango Restaurant and had a fabulous dinner courtesy of the ALM Forum organizing staff. Great conversation with Scott Ambler, Curtis Stuehrenberg, Peter Varhol, and Seth Eliot, as well as several others. The nerd brain power in that small room was probably off the charts, and I was honored to have been included in this event. Seattle, thank you for a very busy and truly enjoyable week. For those who have been keeping track of this rather long missive, my thanks to you, too. To everyone who came to my talk and tweeted or retweeted my comments, and who commented back to me about my talk and gave me your impressions: feedback is a gift, and I've received many gifts today. Truly, thank you so much.

With this, I must return back to reality and back to San Francisco early this morning. I've enjoyed our time together, and I hope that, in some small way, this meandering three days of live blogging has given you a flavor of the event and what I've learned these past few days. Let's do it all again some time :)!!!

Wednesday, April 2, 2014

A San Franciscan in Seattle: #ALMForum Day Two Reflections

Last night, Adam Yuret invited me out to see what the wild world of Seattle Lean Coffee is all about. Having heard from a number of the people who have participated in these events, I decided I wanted to play as well, so my morning was centered around Lean Coffee and meeting a great group of Seattleites and their various roles and areas of expertise.

We covered some interesting topics including the use of Pomodoro and how to make the best use of it (I added the Procrastination Dash to the mix of discussions), the use of SenseMaker and whether or not the adherence to it as a paradigm bordered on religion (it's a framework for helping realize and see results, but it's not magic), some talk about the challenges of defining what technical testing really means (yes, I introduced that topic ;) ), sharing some thoughts on what defines a WIP limit for an organization, and some thoughts about "Motivation 3.0" (based on Daniel Pink's book "Drive").


Great discussions, lots of interesting insights, and an appreciation for the fact that, over time, we see the topics change from being technical to being more humanistic. The humanistic questions are really the more interesting ones, in my estimation. Again, my thanks to Adam and the rest of the Seattle Lean Coffee group for having me attend with them today.

-----

Cloud Testing in the Mainstream is a panel discussion with Steve Winter, Ashwin Kothari, Mark Tomlinson, and Nick Richardson. The discussion has ranged across a variety of topics, starting with what drove these organizations to start doing cloud based solutions (and therefore, cloud based testing), and how they have to focus on more than just the application in their own little environment, including how much they need to be aware of the in-between hops to make their application work in the cloud. As an example, latency becomes a very real challenge, and tests that work in a dedicated lab environment will potentially fail in a cloud environment, mainly because of the distance and time necessary to complete the configuration and setup steps for tests.

Additional technical hurdles have come from getting into the idea of continuous integration and needing to test code in production, as well as to push to production regularly. Steve works with FIS Mobile, which caters to banking and financial clients. Talk about a client base that is resistant to the idea of continuous deployment; still, certain aspects are indeed able to be managed and tested in this way, or at least a conversation is happening where it wasn't before.

Performance testing now takes on additional significance in the cloud, since the environment has aspects that are not as easily controlled (read: gamed) as they would be if the environment were entirely contained in their own isolated lab.

Nike was an organization that went through a time where they didn't have the information that they needed to make a decision. In-house lab infrastructure was proving to be a limitation, since it couldn't cover the aspects of their production environment or give a real example of how the system would work on the open web. Since Ops was able to demonstrate some understanding through monitoring of services in the cloud, that helped the QA team decide to collaborate, to understand how to leverage the cloud for testing, and to see how leveraging the cloud made for a different dialect of testing, so to speak.

A question that came up was whether cloud testing was only for production testing, and of course the answer is "no", but it does open up a conversation about how "testing in production" can be performed intentionally and purposefully, rather than being something to be terrified about and say "oh man, we're testing in PRODUCTION?!" Of course, not every testing scenario makes sense to run in production (many would be just plain insane), but there are times when it does make a lot of sense to do certain tests in production (a live site performance profile, monitoring of a deployment, etc.).

Overall an interesting discussion and some worthwhile pros and cons as to why it makes sense to test in the cloud. Having made this switch recently, I really appreciate the flexibility and the value that it provides, so you'll hear very few complaints from me :).

-----
Mike Brittain is talking about Principles and Practices of Continuous Deployment, and his experiences at Etsy. Companies that are small can spin up quickly, and can outmaneuver larger companies. Larger companies need to innovate or die. There are scaling hurdles that need to be overcome, and they are not going to be solved overnight. There also needs to be a quick recovery time in the event something goes wrong. Quality is not just about testing before release; it also includes adaptability and response time. Even though the ideas of Continuous Deployment are meant to handle small releases frequently performed, there still needs to be a fair amount of talent in the engineering team to handle that. The core idea behind being successful in Continuous Deployment is "rapid experimentation".

Continuous Delivery and Continuous Deployment share a number of principles. First is to keep the build green, with no failed tests. Second is to have a "one button" option: push the button, and all deployment steps are performed. Continuous Deployment differs a bit in that every passing build is deployed to production, whereas Continuous Delivery means the feature is delivered when there is a business need. Most of the builds deploy "dark changes", meaning code is pushed, but little to no change is visible to the end user (CSS rules, unreferenced code, back-end changes, etc.). A check-in triggers a test run. If clean, that triggers automated acceptance tests. If those pass, it triggers user acceptance tests. If that's green, then it pushes the release. At any point, if a step is red, it will flag the issue and stop the deploy train.
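
For my own notes, here's a rough sketch of that gating chain as I understood it. This is my approximation in Python, not Etsy's actual tooling; the stage functions are placeholders standing in for real test suites.

```python
# A sketch of the gating chain described above: check-in tests ->
# automated acceptance -> user acceptance -> release. Any red stage
# stops the deploy train. Stage bodies are placeholders.
from typing import Callable, List, Tuple

def checkin_tests() -> bool:          # placeholder: run the check-in suite
    return True

def acceptance_tests() -> bool:       # placeholder: run automated acceptance
    return True

def user_acceptance_tests() -> bool:  # placeholder: run UA tests
    return True

PIPELINE: List[Tuple[str, Callable[[], bool]]] = [
    ("check-in tests", checkin_tests),
    ("automated acceptance", acceptance_tests),
    ("user acceptance", user_acceptance_tests),
]

def deploy() -> None:
    print("Deploying to production (possibly as a dark change)")

def run_pipeline() -> None:
    for name, stage in PIPELINE:
        if not stage():
            print(f"Stage '{name}' is red -- stopping the deploy train")
            return
        print(f"Stage '{name}' is green")
    deploy()

if __name__ == "__main__":
    run_pipeline()
```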

Going from one environment to another can bring unexpected changes. How many times have you heard "what do you mean it's not working in production? I tested that before we released!" Well, that's not entirely surprising, since our test environment is not our production environment. The question, of course, is: where's the bug? Is it in the check-ins? Are we missing a unit test (or several)? Are we missing automated UA tests (or manual UA tests)? Do we have a clear way of being notified if something goes wrong? What does a rollback process look like? All of these are still issues, even in Continuous Deployment environments. One avenue Etsy has provided to help smooth this transition is a setup that does pre-production validation. Smoke tests, integration tests, functional and UA tests are performed with hooks into some production environment resources, and active monitoring is performed. All of this without having to commit the entire release to production, or doing so in stages.

Mike made the point that Etsy pushes approximately 50,000 lines of code each month. With a single big release, there are a lot of chances for bugs to be clustered in that one release. By making many releases over the course of days, weeks or months, the odds of a cluster of bugs appearing are minimal. Instead, the bugs that do appear are isolated and considered within their release window, and their fix likewise tightly mirrors their release.

This is an interesting model. My company is not quite to the point that we can do what they are describing, but I realized we are also not way out of the ballpark to consider it. It allows organizations to iterate rapidly, and also to fix problems rapidly (potentially, if there is enough risk tolerance built into the system). Lots to ponder ;).

-----
Peter Varhol is covering one of my favorite topics, which is bias in testing (specifically, cognitive bias). Peter started his talk by correlating the book "Moneyball" to testing: often, the stereotypical best "hitter/pitcher/runner/fielder/player" does not necessarily correlate to winning games. By overcoming the "bias" that many of the talent scouts had, the team's general manager was able to build a consistently solid team by going beyond the expectations.

There's a fair amount of bias in testing. That bias can contribute to missing bugs, or to testers not seeing bugs, for a variety of reasons. Many of the easy-to-fix gaps (missing test cases, missing automated checks, missing requirement parameters) can be added and covered in the future. The more difficult one is our own bias as to what we see. Our brains are great with ambiguity. They love to fill in the blanks and smooth out rough patches. Even when we have a "great eye for detail", we can often plaster over and smooth out our own experience, without even knowing it.

Missed bugs are errors in judgment. We make a judgment call, and sometimes we get it wrong, especially when we tend to think fast. When we slow down our thinking, we tend to see things we wouldn't otherwise see. Case in point: if I just read through my blog to proofread the text, it's a good bet I will miss half a dozen things, because my brain is more than happy to gloss over and smooth out typos; I get what I mean, so it's good enough... well, no, not really, since I want to publish and have a clean and error-free output.

Contrast that with physically reading out, and vocalizing, the text in my blog as though I am speaking it to an audience. This act alone has helped me find a large number of typos that I would otherwise totally miss. The reason? I have to slow down my thinking, and that slowdown helps me recognize issues I would have glossed over completely (this is the premise of Daniel Kahneman's "Thinking, Fast and Slow"). To keep with the Kahneman nomenclature, we'll use System 1 for fast thinking and System 2 for slow thinking.

One key thing to remember is that System 1 and System 2 may not be compatible, and they may even be in conflict. It's important to know when we might need to dial in one thought approach or the other. Our biases could be personal. They could be interactional. They could be historical. They may be right a vast majority of the time, and when they are, we can get lazy. We know what's coming, so we expect it to come. When it doesn't, we are either caught off guard, or we don't notice it at all. "Representative Bias" is a more formal way of saying this.

When we are "experts" in a particular aspect, we can have that expertise work against us as well. we may fail to look at it from another perspective, perhaps that of a new user. This is called "The Curse of Knowledge".

"Congruence Bias" is where we plan tests based on a particular hypothesis, whereas we may not have alternative hypotheses . If we think something should work, we will work on the ways to support that a system works, instead of looking at areas where a hypothesis might be proven false.

'Confirmation Bias" is what happens when we search for information or feedback that confirms our initial perceptions.

"The Anchoring Effect" is what happens when we become to convinced on a particular course of action that we become locked into a particular piece of information, or a number, where we miss other possibilities. Numbers can fixate us, and that fixation can cause biases, too.

" Inattentional Blindness" is the classic example where we focus on a particular piece of information that they miss something right in front of them (not a moonwalking bear, but a gorilla this time ;) ). there are other visual images that expand on this.

The "Blind Spot Bias" comes from when we evaluate our decision making process compared to others. With a few exceptions, we tend to think we make better decisions than others in most areas, especially those we feel we have a particular level of expertise.

Most of the time, when we find a bug, it's not because we have missed a requirement or missed a test case (not to say that those don't lead to bugs, but they are less common). Instead, it's a subjective parameter: we're not looking at something in a way that could be interpreted as negative or problematic. This is an excellent reminder of just how much we need to be aware of what and where we can be swayed by our own biases, even by this small and limited list. There's lots more :).

-----
More to come, stay tuned.

Tuesday, April 1, 2014

Live From Seattle, it's #ALMForum: A TESTHEAD Live Blog

Good morning everyone. I'll be coming at you live from Seattle at various times of the day. This is a live blog, and as such, it's going to be stream of consciousness, it may contain mistakes, and it may also have gaps in logical flow. If you want to see the real time feed, an ability to handle ambiguity will help. If you can't handle a touch of ambiguity, wait until later in the day when I get a chance to clean things up a bit ;).

We start out with Scott Ambler (@scottwambler on Twitter) and a discussion of Disciplined Agile Delivery and how to scale Agile practices in larger organizations. Scott made a few points about the fact that Agile is a process with a lot of variations on the theme. Methodologies and methods are all nice, but each organization has to piece together for themselves which of the methods will actually work. Scott has written a book called Disciplined Agile Delivery (DAD). The acronym of DAD is not an accident. Key aspects of DAD are that it is people first, goal driven, a hybrid approach, learning oriented, utilizes a full delivery lifecycle, and tries to emphasize the solution, not just the software. In short, DAD tries to be the parent; it gives a number of "good ideas" and then lets the team try to grow up with some guidance, rather than an iron hand.

Questions to ask: What are the variety of methods used? What is the big picture? While we can look at a lot of terminology, and we can say that Scrum or agile processes are loose-form and just kind of happen, that's not really the case at all. Solution delivery is complex, and there's a lot of just plain hard reality that takes place. Most of us are not working on the cool new stuff. We're more commonly looking at adding new features or enhanced features to stuff that already exists. Each team will probably have different needs, and each team will probably work in different ways. DAD is OK with that.

Scott thankfully touched on a statement in a keynote that made me want to throw the "devil horns" and yell "right on!": there is no such thing as a best practice; there are good practices in some circumstances, and those same practices could be the kiss of death in another situation. Granted, for those of us who are part of the context-driven testing movement, this is a common refrain. The fact that this is being said at a conference that is not a testing conference per se brought a big smile to my face. The point is, there are many lean and agile options for all aspects of software delivery. The advice we are going to get is going to conflict at times, it's going to fit some places and not others, and again, that's OK.

Disciplined Agile Delivery comes down to asking the questions around Inception (How do we start?), Construction (What is the solution we need to provide?), Transition (How do we get the software to our customers?) and Ongoing (What do we do throughout all of these processes?).

For years, we used to be individually focused. We all would do our "best practices" and silo ourselves in our disciplines. Agile teams try to break down those silos, and that's a great start, but there's more to it than that. Our teams need to work with other teams, and each team is going to bring their own level of function (and dysfunction). This is where context comes into play, and it's one of the ways that we can get a handle on how to scale our methods. While we like the idea of co-location, the fact is that many teams are distributed. Some teams are partially dispersed, others are totally dispersed (reminds me of Socialtext as it was originally implemented; there was no "home office" in the early days). Teams can range from small (just a few people) to medium (10-30 people) to large (we think 30+ is large; other companies look at anything less than 50 people as a small team). The key point is that there are advantages and disadvantages regarding the size of your team. Architecture may have a full architecture team with representatives in each functional group. Product owners and product managers might also be part of an overarching team where representatives come from smaller groups and teams.

The key point to take away from this is that Agile transformations are not easy. They require work, they take time to put into place, there will be missteps, and there will be variations that don't match what the best-practices models represent. The biggest challenge is one of culture, not technology. Tools and scrum meetings are fairly easy. Making these a real part of the flow and life of the business takes time, effort and consistent practice. Don't get too caught up in the tools doing everything for you. They won't. Agile/Scrum is a good starting point, but we need to move beyond this. Disciplined Agile Delivery helps us up our game, and gets us on a firmer footing. Ultimately, if we get these challenges under control with a relatively small team, we can look to pulling this off with a large enterprise. If we can't get the small team stuff working, Agile scaling will be pretty much irrelevant.

My thanks to Scott for a great first talk, and now it's time to get up and see what else ALM forum has to offer.
-----

I'm going to be spending a fair amount of my time in the Changing Face of Testing Track. I've already connected with some old friends and partners in crime. Mark Tomlinson and I are probably going to be doing a fair amount of cross-commenting, so don't be surprised if you see a fair amount of Mark in my comments ;).

Jeff Sussna is taking the lead for us testers, talking about how QA is changing and how we need to change along with it. We're leaving industrialism (in many ways) and entering a post-industrial world, where we share not so much things as experiences. We are moving from a number of paradigms into new paradigms:

from products to services: locked in mechanisms are giving way to experiences that speak to us individually. The mobile experience is one of the key places to see this. People who have negative experiences don't live with it, they drop the app and find something else.

from silos to infusion: being an information silo used to give a sense of job security. It doesn't any longer. Being able to interact with multiple organizations and to be adaptable is more valuable than being someone who has everything they know under lock and key.

from complicated to complex: complicated is predictable, it's bureaucratic, it's heavy. Complex is fragmented. It's independent, it doesn't necessarily follow the rules, and as such it's harder to control (if control is possible at all).

from efficient to adaptive: efficiency is only efficient when the process is well understood, and the expectations are clearly laid out. Disruption kills this, and efficiency gives way when you can't predict what is going to happen. This is why adaptability is more valuable than just efficiency. Learn how to be adaptive and efficient? Now you've got something ;).

The disruption that we see in our industry is accelerating. Companies that had huge leads and leverage that could take years to erode are eroding much faster. Disruption is not just happening, it's happening in far more places. Think about Cloud computing. Why is it accelerating as a model? Is it because people are really all that interested in spinning up a bunch of Linux instances? No, not really. The real benefit is that we can create solutions (file sharing, resource options, parallel execution) where we couldn't before. We don't necessarily care about the structure of what makes the solution, we care that we can run our tests in parallel in far less time than it would take to run them on a single machine in serial. Dropbox is genius not because it's in the cloud, it's genius because any file I really care about I can get to anywhere, at any time, on any device, and I can do it with very little physical setup and maintenance (changes delivered in an "absorbable manner").


Think of Netflix and their "chaos monkey". They go in and turn instances off. They deliberately break stuff. They want to see what they might be able to find. "I don't always test my code, but when I do, I do it in production." That's supposed to be a joke, but believe it or not, there is a great benefit to testing in production. This is why I am very invested in using my company's product on their production servers, and looking at issues based on workflows I depend upon.

So what does this all mean for testers and testing? Does this mean that our role is being usurped? No, but it does mean our role is changing. Instead of having to babysit machines and be the isolated gatekeeper, we can test more intelligently and with a greater sense of adventure. We can also emphasize that testing goes beyond just performing scripted steps. We can also test more than just the code that we receive, when we receive it. We can test requirements. We can provoke questions. More to the point, we can be a feedback loop to the organization. If an organization believes in being truly adaptive, then it is, effectively, an environment that is friendly to QA.

Mark and I had a little fun considering some of the ramifications as presented, and since Mark said he has some debatable comments he'll be sharing in his talk, I'm going to hold off and not comment until then (stay tuned for further details ;) ).  Suffice it to say, testers are notorious for not necessarily agreeing across the board. That's also part of testing. If we agreed 100%, I'd be deeply worried about the state of our profession.

Testing covers a lot of areas. User testing validates usability. Unit tests can cover code functionality. But there's a lot of space in between those two areas that doesn't get as much attention. There are lots of "ilities" we need to be paying attention to.

Retros are a good opportunity to see what went well and what can go better, but the technique only works when it's done on a frequent enough level, and the feedback is substantive.

What we definitely need to get away from is "Discontinuous Quality". Let's stop talking about QA wagging the dog. Let's not save testing until the end, where we find problems and tell people about them, only to be told that we are the bottleneck stopping the organization from releasing. Instead, let's get to the party earlier. Let's check out ideas earlier. Let's understand what we are able to contribute, and in as many places as we can. Ultimately, we are not delivering functionality, we are delivering the ability to help accomplish goals and objectives. How we do it is not nearly as important as the fact that we actually do it, and do it in a way that is both effective and adaptable.

For me, the most common thing I can think of to help this is the term "QA". I do my best to not use that term at all if I can get away with it. If I'm asked if I'm in QA, I always answer "yes, I'm a software tester". We have to get out of the business of assuring quality, because we really can't do that. We can inform, we can evangelize, we can enlighten, but we really can't assure anything. What we can do is test, and weave a compelling story. Ultimately, the story is the most important thing we can deliver, as it's the narrative that really defines whether a solution goes out or doesn't.

-----

Ken Johnston (@rkjohnston) is talking about EaaSY, or "Everything as a Service, Yes!". Ken wants to help us see what the role of testing actually is. It's not really about quality assurance, but more about risk assessment and management. I agree with this, in the sense that, in the old school environments I used to work in, especially when I worked for a game publisher, when a bug shipped to production, unless it was particularly egregious, it was eternal. In the services world, and the services model, since software is much more pliable, and much more manageable, there's no such thing as a "dated ship". We can update all the time, and with that, problems can be addressed much more quickly. With this model, we are less forced into slotted times. We can fix a bug the same day. We can release a new feature in a week where it used to take a quarter or a year.

EaaSY covers a number of parts that need to be in place to be effective.

Componentization: break out as much of the functionality from external dependencies as possible.

Continuous Delivery: Requires continuous stability. It needs a targeted set of tests and an atomic level of development, and it's likely best for areas that can be deployed/fixed with a low number of people being impacted by the change (the more mission critical the area, the less likely a Continuous Delivery model will be the desired approach; not impossible, but probably not the best focus, IMO).

User Segmentation: When we think of how to deploy to users, we can use a number of methods. We can create concentric rings, with the smallest ring being the most risk-tolerant users, expanding out to larger sets of users; the farther out we get, the more risk averse the users. Additionally, we can use tools like A/B testing to see how two groups of people react to a change structured one way or another (structure A vs. structure B). This is a way to put a change into production, but have only a small group of people see it and react to it.
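
Here's a small sketch of how I picture the ring/bucket idea working. The hashing scheme, ring names and percentages are my own assumptions, purely for illustration, not anything Ken specified.

```python
# A sketch of concentric-ring rollout plus A/B bucketing. The ring sizes
# and hashing approach are illustrative assumptions only.
import hashlib

def bucket(key: str, buckets: int = 100) -> int:
    """Stable 0-99 bucket derived from an arbitrary key (e.g. a user id)."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets

def ring_for(user_id: str) -> str:
    """Map a user into a ring; the innermost rings are the smallest."""
    b = bucket(user_id)
    if b < 1:
        return "internal"        # ~1% of users, most risk tolerant
    if b < 5:
        return "beta"            # next ~4%
    if b < 20:
        return "early-adopters"  # next ~15%
    return "everyone"            # the risk-averse outer ring

def ab_variant(user_id: str, experiment: str) -> str:
    """Split users 50/50 between structure A and structure B."""
    return "A" if bucket(user_id + ":" + experiment) < 50 else "B"

if __name__ == "__main__":
    for uid in ("alice", "bob", "carol"):
        print(uid, ring_for(uid), ab_variant(uid, "new-checkout"))
```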

Runtime Flags: Layers can be updated independently. We can fork traffic through the production path: at key areas, data can be forked and routed through a different setup, and then reconvene with the production flow (this is pretty cool, actually :) ). Additionally, code can be pushed, but it can be "pushed dark", meaning it can be put in place but turned on at a later time.
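
And a similarly rough sketch of a runtime flag guarding a dark-pushed code path. The in-memory flag store and the search example are just my illustration; a real system would read flags from configuration or a flag service.

```python
# A sketch of the runtime-flag idea: code is deployed "dark" and only takes
# effect when the flag is flipped. The flag store here is a plain dict.

FLAGS = {
    "new_search_backend": False,   # pushed dark: deployed but switched off
}

def flag_enabled(name: str) -> bool:
    return FLAGS.get(name, False)

def search(query: str) -> str:
    if flag_enabled("new_search_backend"):
        return f"results for {query!r} from the new backend"
    return f"results for {query!r} from the existing backend"

if __name__ == "__main__":
    print(search("cheese"))              # old path: flag is off
    FLAGS["new_search_backend"] = True   # flip the flag at runtime, no redeploy
    print(search("cheese"))              # new path now active
```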

Big Data: Five "Vs" (Volume, Variety, Velocity, Verification, Value). These need to be considered for any data driven project. The better each of these is, the more likely we will be successful in utilizing big data solutions.

Minimum Viable Product: Mark calls out Seth Eliot's "Big Up Front Testing" (BUFT) and says "say no to BUFT". With a minimum viable product, we need to scale our testing to a point where we can have an MVP, and appropriate testing for the scale of the MVP. Additionally, there are options where we can Test in Production (not at full scale, of course).

Overall, this was a very interesting approach and idea. Many of the ideas and approaches described sound very similar to activities we are already doing in Socialtext, but it also gives me areas where I can see that we can do better.

-----

James Whittaker (@docjamesw) is doing the next plenary session, called "A Future Worth Wanting". First we start with our own devices, our own apps; we own them, they're ours, but they aren't particularly useful if they don't connect to a data source somewhere (call it the web and the cloud for simplicity). James is making the point that there's a fair amount of stuff in between that we are not including. The web browser is one of these middle-point items. The app store is another. We know what to do and how to do it, and we don't give it much thought. Could we be doing better?

Imagine getting an email, then having to research where an event is, how much tickets cost, and how to handle the transactions. What if the software could recognize those "entities", and we could use them directly to find information and perform transactions? Frankly, this would be cool :).

What about a calendar? We are planning to do something, some kind of activity that we need to be time-focused for. What do we naturally do? We jump to a browser and go figure out what we need. What if our calendar could use those entity relationships and do the search for us, or better yet, return what has already been searched for based on the calendar parameters? Think of writing code: wouldn't it be cool to find a library that could expand on what you are doing, or do what you are hoping to do?

The idea here is to be able to track "entities" to "intents", and execute those intents. Think about being able to call up a fact checking app in PowerPoint, and based on what you type, you get a return specific to your text entry. Again, very interesting. The key takeaway is that our apps, our tools, our information needs are getting tailored to exactly the data we want, from the section of the web or cloud that we actually need.

This isn't a new concept, really. This is the concept of "agents" that's been talked about for almost two decades. The goal we want is to be able to have our devices, our apps, our services, etc, be able to communicate with each other and tell us what we need to know when we need to know it. It's always been seen as a bit of a pipe dream, but every week it seems like we are getting to see and know more examples that make that pipe dream look a little less far fetched.

Goals we want to aim for:

- Stop losing the stuff we've already found
- Localize the data and localize the monetization
- Apps can understand intent, and if they don't, they should. Wouldn't it be great if based on a search or goal, we can download the appropriate apps directly?
- Make it about me, not my device


Overall, these are all cool ideas, and yes, these are ideas I can get behind (a bit less branding, but I like the sentiment ;) ).

-----

Alexander Podelko (@apodelko) wants us to see a "bigger picture" when it comes to load testing. There's a lot of terminology that goes into load testing, and the terms are often interchangeable, but not always. The most common image we have of load testing (and yes, I've lived this personally) is the last minute before deployment: we put some synthetic tests together in our lab, try to run a bunch of connections, see what happens, call it a day and push to production. As you might guess, hilarity ensues.

The problem with this is not just the lateness, or the inability to really match our environment, but that we miss a lot of stuff. There are a lot of options in load testing that can give us a broader picture (as the talk suggests). Another issue load testing brings is the fact that each tool has limitations in what it can cover, as well as differences in the robustness provided by the various tools (as you might guess, JMeter does not solve every load testing problem... I know, contain your shock and dismay ;) ).

As Alexander points out quite appropriately, web sites were simple for only a very brief window of time. They have expanded to be more complex and less controllable through the standard, simple tools that would cover everything in one place. There are a variety of tools that can be used, ranging from open source to commercial offerings. The more complicated the system, the less likely it is that one tool will be able to answer all the needs.

Overall, load testing presents some of the broadest challenges for the systems being tested, at least if we want to create load that is not completely synthetic and generally meaningless. Making load tests that are complex, heterogeneous, and indicative of real world traffic is possible, but the more unique and real-world the traffic you wish to emulate, the more difficult it is to actually provide that simulated traffic.
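
For context, here's about the simplest possible synthetic load sketch I can think of (my own example, not from the talk), which illustrates exactly the kind of one-dimensional traffic Alexander is cautioning against treating as realistic. The endpoint is a hypothetical placeholder.

```python
# A deliberately naive synthetic load sketch: fire N concurrent GETs at a
# hypothetical endpoint and record response times. Real-world load modelling
# would mix transaction types, pacing, think time and data variation.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8000/health"   # hypothetical endpoint
CONCURRENCY = 10
REQUESTS = 100

def timed_get(_: int) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        timings = list(pool.map(timed_get, range(REQUESTS)))
    print(f"{len(timings)} requests, avg {sum(timings)/len(timings):.3f}s, "
          f"max {max(timings):.3f}s")
```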


-----

In the mid afternoon, they held a number of Birds of a Feather sessions to provide some more interactive conversations, and one of them was specifically about how to use Git. Granted, I'm pretty familiar with Git, but I always appreciate seeing how people use it and seeing different ways to use it that I may have not considered.

One of the tools that they used for the demonstration was "Learn Git Branching", which displays a graphical representation of a variety of commits, and shows what commands actually do when they are run (git commit, git merge, git rebase, etc.).

-----

The last session of the day is being delivered courtesy of Allan Wagner, and the focus is on continuous testing, or why we would want to consider doing it. Labor costs are getting higher even with outsourcing options considered, test lab complexity is increasing, and the amount of testing required keeps growing and growing. OK, so let's suppose that Continuous Testing is the approach you want to go with (I hope it's not the only approach, but cool, I can go with it for this paradigm); where do you start?

For testers to be able to do continuous testing, they need:

- production-like test environments (realistic and complete)
- automated tests that can run unattended
- orchestration from build to production which is reliable, repeatable and traceable

One very good question to ask is "how much time do you spend doing repetitive set up and tear down of your test environments?" In my own environment, we have gotten considerably better in this area, but we do still spend a fair amount of time to set up our test environments. I'm not entirely sure that, even with service virtualization, there would be a tremendous increase in time saved for doing spot visual testing. While I do feel that having automated tests is important, I do not buy into the idea that automated testing only is a good idea. It certainly is a big plus and a necessary methodology for unit tests, but beyond that, trying to automate all of the tests seems to fall under the law of diminishing returns. I don't think that that is what Allan is suggesting, but I'm adding my commentary just the same ;).

Service Virtualization looks to create, as its name describes, the ability to make elements that are unavailable available for testing. It requires mocks and stubs to work, where you can simulate the transactions rather than try to configure big data hardware or front-end components that don't yet exist for our applications.

Virtual Components need to fit certain parameters. They need to be simple, non-deterministic, data-driven, using a stateful data model, and have functionality where we can easily determine their behavioral aspects.

The key idea is that, as development continues, the virtual components will be replaced with the real components, and the team can start looking at additional pieces of later functionality. In other examples, the virtualized components may be those that simulate a third party service that would be too expensive to have as a regular part of the development environment.
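
To make the mock/stub idea concrete, here's a tiny sketch of a virtualized third-party service. The class and method names are my own invention, not any vendor's API; it just shows a simple, data-driven, stateful stand-in that tests can call in place of the real thing.

```python
# A sketch of service virtualization: a data-driven, stateful stub that
# stands in for a third-party payment gateway. (Python 3.9+ annotations.)

class VirtualPaymentService:
    """Simulates a third-party payment gateway for test environments."""

    def __init__(self, canned_responses: dict[str, str]):
        self.canned_responses = canned_responses   # data-driven behaviour
        self.charges: list[tuple[str, int]] = []   # stateful record of calls

    def charge(self, card_token: str, amount_cents: int) -> str:
        self.charges.append((card_token, amount_cents))
        # Return a canned outcome; default to "approved" if unspecified.
        return self.canned_responses.get(card_token, "approved")

if __name__ == "__main__":
    gateway = VirtualPaymentService({"tok_declined": "declined"})
    print(gateway.charge("tok_ok", 1999))        # approved
    print(gateway.charge("tok_declined", 500))   # declined
    print(gateway.charges)                       # the stub remembers state
```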

Allan made the point in his talk that Continuous Testing is not intended to be the be-all and end-all of your testing; it is meant to be a way to perform automated testing as early and as focused as possible, so that the drudge work of setup, teardown, configuration changes and all of the other time consuming steps can be automated as much as possible. This is meant to free the thinking testers to do the work that really matters, which is to perform exploratory testing and let the tester genuinely think. That's always a positive outcome :).

-----

From here, it's a reception, some drinks, and some milling about, not to mention dinner and chilling with the attendees. I'll call this a day at this point and let you all take a break from these updates, at least for today. Tomorrow, I'm going to combine two events: I'll be taking part in SEALEAN (a Lean Coffee event) and then picking up with the ALM Forum conference again after that. Have a good night testing friends, see you tomorrow morning :).

End of Entry: 04/01/2014: 05:20 p.m. PDT