Wednesday, July 16, 2014

A Time and a Season to all Things

Today I sent the following message to the members of the Education Special Interest Group of the Association for Software Testing:


Hello everyone!

Three years ago at this time, I took on a challenge that no one else wanted to take on. I realized that there was a lot at stake if someone didn't (the AST BBST classes might have ceased), and thus a practitioner with little academic experience took over a role that Cem Kaner had managed for several years. I stepped into the role of Education SIG Chair, and through that process I learned a lot, we as a SIG have done a lot, and some interesting projects have come our way (expansion of the AST BBST classes and offerings, the SummerQAmp materials, the PerScholas mentoring program, etc.). It's been a pleasure to be part of these opportunities and to represent the members of AST in this capacity.

However, there is a time and a season for all things, and I feel that my time as an effective Chair has reached its end. As of July 15, 2014, I have officially resigned as the Chair of the Education Special Interest Group. This does not mean that I will stop being involved, or stop teaching BBST courses, or stop working on the SummerQAmp materials. In fact, it's my desire to work on those things that has prompted me to take this step. Even my hyper-involved self has to know his limitations.

I have asked Justin Rohrman to be the new Chair of the Education Special Interest Group, and he has graciously accepted. Justin is more than capable of doing the job. In many ways, I suspect he will do a better job than I have. I intend to work with him over the next few weeks to provide an orderly transition of roles and authority, so that he can do what I do, and ultimately, so I can stop doing some of it :).

Justin, congratulations, and thank you. EdSIG, I believe wholeheartedly you shall be in good hands.

Regards,
Michael Larsen
Outgoing EdSIG Chair


To everyone I've had the chance to work with in this capacity over the past three years, thank you. Thank you for your patience as I learned how to make everything work, for some definition of "work". Thank you for helping me learn and dare to try things I wasn't aware I could even do. Most of all, thanks for teaching me more than I am sure I have ever taught any of you over these past three years.

As I said above, I am not going away. I am not going to stop teaching the BBST courses; in fact, this will give me more of an opportunity to teach them, or to assist others in doing so, which is the more likely outcome, I think. It also frees me up to give more attention to programs that matter a great deal to me, such as SummerQAmp and PerScholas. As I said above, I believe Justin will be fantastic, and I'll be just a phone call or email away if he should need help ;).

Friday, July 4, 2014

Packt Publishing Turns Ten - We Get The Presents :)

Over the past several years, I have been given many books to review by Packt Publishing, and it's safe to say a substantial part of my technical library comes from them. As a blogger who receives these books, usually for free, I like to help spread the word for publishers when they make special offers for their readers.

Below is a message from Packt Publishing that celebrates their tenth anniversary as a publisher, and an announcement that all of their ebook and video titles are, until Saturday, July 5, 2014, available for $10 each.

To take advantage, go to http://bit.ly/1mWoyq1

Packt celebrates 10 years with a special $10 offer

This month marks 10 years since Packt Publishing embarked on its mission to deliver effective learning and information services to IT professionals. In that time it’s published over 2000 titles and helped projects become household names, awarding over $400,000 through its Open Source Project Royalty Scheme.

To celebrate this huge milestone, from June 26th, every e-book and video title is $10 each for 10 days – this promotion covers every title and customers can stock up on as many copies as they like until July 5th.

Dave Maclean, Managing Director, explains: ‘From our very first book published back in 2004, we’ve always focused on giving IT professionals the actionable knowledge they need to get the job done. As we look forward to the next 10 years, everything we do here at Packt will focus on helping those IT professionals, and the wider world, put software to work in innovative new ways.

We’re very excited to take our customers on this new journey with us, and we would like to thank them for coming this far with this special 10-day celebration, when we’ll be opening up our comprehensive range of titles for $10 each.’

If you’ve already tried a Packt title in the past, you’ll know this is a great opportunity to explore what’s new and maintain your personal and professional development. If you’re new to Packt, then now is the time to try our extensive range – we’re confident that in our 2000+ titles you’ll find the knowledge you really need, whether that’s specific learning on an emerging technology or the key skills to keep you ahead of the competition in more established tech.

More information is available at: http://bit.ly/1mWoyq1

Tuesday, July 1, 2014

Coyote Teaching, Lousy Estimates, and Making a Pirate

Several weeks ago I made a conscious decision: to focus on one goal and one goal only. I'll admit it's a strange goal, and it's difficult to put into simple words or explain in a way that many people will understand, but for long time readers of this blog, that shouldn't come as a surprise ;).

In August, I'm going to be presenting a talk with Harrison Lovell about Coyote Teaching, and the ways in which this type of teaching can help inform us better than rote example and imitation. As part of this process, I thought it would be fun to take something that I do that is completely outside the realm of software testing, and see what would happen if I applied or examined Coyote Teaching ideas and techniques in that space. Personally, I found the results very interesting, and over the next few days, I'm going to share some of what I learned, and how those lessons can be applied.

What was this unusual project? It's all about "playing pirate" ;).

OK, wait, let me back up a bit...

One of the things I've been famous for, over several Halloweens, has been my elaborate and very involved pirate costumes. Why pirates? They fascinate me, always have. They are the outsiders, the ones who dared to subvert a system that was tyrannical, and to make a world that was built on their own terms. Granted, that world was often bloody, violent, deceptive, and very dangerous, with a strong likelihood the pirates would be killed outright or publicly executed, yet it has captured the imaginations of generations through several centuries.

Here in Northern California, there is an annual affair called the Northern California Pirate Festival. Many people dress up and “play pirate” at this event, and last year I made a commitment that I would be one of them. More to the point, I decided I wanted to go beyond just “playing pirate”, I wanted to get in on the action. Now, in this day and age, get in on the action doesn't mean “become an actual pirate”, it means join the ranks of the re-enactors. This year was a small step, in that I chose to volunteer for the festival, and work in whatever capacity they needed. With this decision, I also opted to go beyond the Halloween tropes of pirates, and actually research and bring to life a composite character from that time, and to pay special attention to the clothes, the mannerisms, and the details of the particular era.

When most people look at popular representations of pirates, they're looking at tropes from the Golden Age of Piracy, that period in the early 1700s where many of the famous stories are set (the Pirates of the Caribbean franchise, Treasure Island, Black Sails, etc.). What this ignores is the fact that piracy had been around for millennia, and there were other eras that had a rich history, and an interesting look, all their own.

To this end, I decided that I wanted to represent an Elizabethan Sea Dog. My goal was to have people walk up to me, and say “hey, you look different than most of the people here”, and then I could discuss earlier ages of piracy, or in my case, privateering (and really, if you were on the side of the people being attacked, that difference was mostly irrelevant).

To make this a little more interesting, I decided to make my outfit from scratch. The only items I did not make were my boots, my hat, and the sword and dagger that I chose to carry. Everything else would be handmade, and here is where our story really begins.

The first order of business, if you choose to be a re-enactor, is to do research. If your character is a real person, you need to know as much as possible about not just their personal history, but about their time period, where they came from, the mores of the day, the situations that may have driven someone to the high seas in the first place, and the decisions that might potentially lead them to being privateers or pirates. Even if the character you are re-enacting is fictitious, you still want to be able to capture these details. I spent several months reading up on and examining all of these aspects, but I gave the clothes of the era special attention. What did a mariner in the mid-1500s actually wear? To this end, I came up with a mental picture of what I wanted my Sea Dog to look like. My Sea Dog would have high cavalry-style boots, long pumpkin breeches, a billowy Renaissance-style shirt, a close-fitting jacket (referred to as a “doublet”), a thicker outer jacket called a jerkin, and would wear what was called a "Tudor cap". I would also make a wide belt capable of carrying both a rapier and a main gauche (a parrying dagger used in two-handed dueling, common for the time period). I would make the “frogs”, or the carriers for the sword and dagger. I’d also make a simple pouch to hold valuables. Just a handful of items. It didn't seem that complicated. As someone who already knew how to sew, and who had experience making clothes in the past, I figured this was a project I could knock out in a weekend.

Wow, was I ever wrong!

I thank you if you have stuck with me up to this point, and you may be forgiven if you are thinking "wow, that's quite a buildup, but what does this have to do with the Coyote Teaching method?" Well, let's have a look, starting with the first part of the project, the pumpkin breeches.

Through my research, I decided I wanted to create something that looked dashing, and a little dangerous, and I decided that I would use leather and suede in many of the pieces. The problem with using leather and suede is that it doesn't come on a regular sized bolt of fabric. In fact, real leather and suede is some of the most irregular material you can work with, since it entirely depends on the particular hide you are examining. I quickly realized that I had no pieces large enough to cut a full leg portion from any of my pattern pieces. What to do? In this case, I decided to piece long strips of three inch wide suede together. This would give the look of “panel seams”, and give the sectional look that is common for pumpkin breeches.

So let’s think about the easy part. Make a pair of pants. Just cut some material, and stitch it together, right?

Here are the steps that making these pants really entailed:

- take out multiple suede hides and examine them
- cut away the sections that would be unusable (too thin, too thick, holes or angles that couldn’t be used, etc.)
- lay out the remaining pieces and utilize a template to cut the strips needed. Repeat 40 times.
- take regular breaks, because cutting through suede is tiring.

Irregular suede pieces cut to a uniform width and length.

- baste the pieces together and stitch them down the length of the strips, so as to make panels that were ten strips wide.
- make four of these panels.

Stitched composite panels. Each section of ten strips is used for half of each leg (4 panels total)

- size the pattern for the dimensions of the pants desired.
- cut the suede panels into the desired shapes (being careful to minimize the need to cut over the stitched sections)
- cut matching pattern pieces out of linen to act as a lining for the suede.
- pin and piece together the lining and outer suede and sew them together.
- piece the leather panels to each other and sew them together so they actually resembled breeches
- wrestle with a sewing machine that is sewing through four to six layers of suede at a time, as well as the thickness of the lining material
- replace broken needles, since suede is murder on sewing machine needles, even when using leather needles.
- unstitch areas that bunched up, or where the thread was visible and not cleanly pulled, or where thread broke while stitching.
- make a cutaway and stitch a fly so that the pants can be opened and closed (so as to aid putting on and taking off, and of course, answering the call of nature).
- punch holes to place grommets in the waistband (since breeches of this period were tied to the doublet).

The weekend I had set aside to do the whole project actually gave me just enough time to size the suede and cut the strips I would need. That’s all I managed to do in that time, because I discovered a variety of contingent steps were needed. I had to get my tools together, determine which of my tools were up to the task and which tools I didn’t even own, and clear space and set up my work area to be effective. These issues took way more time than I anticipated.

How long did it take me to actually make these breeches? When all was said and done, a week. Using whatever time I could carve out, I estimated I spent close to 14 hours getting everything squared away to make these.
Completed pumpkin breeches...
or so I thought at the time.


I liked how they turned out; I thought they looked amazing… that is, until I gave them a test run outside in the heat of the day and realized that I would probably die of heat exhaustion.

Since the weekend of the event was looking to be very hot (mid 90s, historically speaking, with one year reaching 104 degrees), I realized these pants would be so uncomfortable as to be unbearable. What could I do now? I had put so much time into these, and I didn’t have time to start over. Fortunately, research came to the rescue. It turns out that there was a style of pumpkin breeches that, instead of being stitched together, had strips of material that acted as guardes and hung loosely. After looking at a few examples, and seeing how they were made, I decided to cut open all of the seams I had spent so much time putting together, and to reinforce the sections at the waist and at the bottom of the leg. It was a long and tedious change, but it allowed air to escape, and me to not die of heat exhaustion.

Jumping a little ahead, but here is the finished
open air version of the pumpkin breeches.


OK, let's talk Coyote Teaching now...

This whole process brought into stark relief the idea of estimating our efforts, and how we, even when we are experienced, can be greatly misled by our enthusiasm for a project.

Was I completely off base to think I could get this project done in a weekend? Turns out, yes! The estimate might have been accurate if I had been using standard bolt fabric, but I wasn’t. I chose to do something novel, and that “novel” approach took five times longer to complete. What’s more, I had to undo much of the work that I had done in order to make it viable.

My estimate was dead wrong, even though I had experience making pants and making items to wear. I knew how to sew, I knew how to piece together items, and I had actually made items before, so I felt that gave me the confidence to make an estimate that would be accurate.

I’ve come to appreciate that, when I try to make an estimate on something I think I know how to do, I am far more likely to underestimate the time requirements needed when I am enthusiastic about the project. In contrast, if I am pessimistic about a project, I am likely to overestimate how long it will take. Our own internal biases, whether they be the “rose colored glasses” of optimism, or the depletion of energy that comes with pessimism, both prevent us from making a real and effective estimate.

Knowing what I know now, how would I consider guiding someone else to do a similar project? I could tell them all of the pitfalls I faced, but those might not be helpful unless they are doing exactly the same thing I am doing. Most of the time, we are not all doing the exact same thing, and my suggestions may prove to be a hindrance. Knowing what I now know about the process of preparing suede in sections, I would likely walk the person through defining what might be done. In the process, it’s possible they might come up with answers I didn’t ("Why are we using real suede, when we can buy suede-cloth that is regular sized on a bolt?" "Couldn’t we just attach the strips to a ready made pair of pants?"). By giving them the realities of the issues they might face, or allowing them to think through those issues on their own, we can help foster avenues and solutions that they would not otherwise find, or that perhaps we wouldn’t, either.

One thing is decided, though… the next outfit I make (and yes, be assured, I will make another one ;) ) is going to be made with regular bolts of wool, cotton or linen, rakish good looks be darned.

Monday, June 30, 2014

Weekend Testing Americas is Looking to "Go Deep" This Saturday

Yes, it's been quiet here for quite a while. Too quiet, and I needed to break the silence. What better way than with an announcement of a Weekend Testing Americas session, you say? I agree wholeheartedly!

With that, I would like to cordially invite each and every one of you reading this to come and join us this Saturday, July 5, 2014.

Weekend Testing Americas #52 - Going Deep with "Deep Testing"
Date: Saturday, July 5, 2014
Time: 09:00 a.m. - 11:00 a.m. Pacific Daylight Time
Facilitator: Michael Larsen


So what does "deep testing" mean? well, if I told you all that now, it wouldn't make much sense to hold the session now, would it? Still, I can't just leave it at that. If that's all I was going to do, I'd just as well have you look at the Weekend Testing site announcement and be done with it.


Justin Rohrman and I were discussing what would make for a good session for July, and he suggested the idea of "deep testing", and in the process, he suggested that we consider a few questions:


What if, instead of just having an unfocused bug hunt, we as a group decided to take a look at a specific feature (or two or three, depending on the size of the group) and do what we could to drill down as far as possible with that particular feature? Heck, why not just dig into a single screen and see what we could find? We have an exercise in the AST BBST Test Design class that does exactly this, only it takes it to the component level (we're talking one button, one dial, one element, period). We're not looking to be that restrictive, but it's an interesting way of looking at a problem (well, we thought so, in any event).


As part of the session, we are going to focus on some of these ideas (there may be lots more, but expect us to, for sure, talk about these):

- how do we know what we are doing is deep testing?

- what do we do differently (thought process, approach, techniques, etc.)?

- how do we actually perform deep testing (hint: staring at a feature longer doesn't make it "deep")?

- how do we know when enough is enough?

If this sounds like an interesting use of part of your Saturday, then please, come join us July 5, 2014. We will be starting at 9:00 a.m. Pacific time for this session, and it will run until 11:00 a.m. Pacific.


For those who have done this before, you already know the procedure. For those who have not, please add "weekendtestersamericas" to your Skype ID list and send us a contact request. More specifically, tell us via Skype that you would like to participate in this upcoming session. We will build a preliminary list from those who send us those requests.


On Saturday, at least 15 minutes before the session starts, please be on Skype and ready to join the session (send us a message to say you're ready; it makes it that much easier to build the session) and we will take it from there.

Here's hoping to see you Saturday.

Wednesday, May 28, 2014

Listening to a Cowboy: Live at Climate Corp, It's BAST!!!

Hello everyone, and sorry for the delay in posting. There are a lot of reasons for that, and I'll explain in a lengthy post (or small series of posts) exactly why that has been the case. However, tonight, I am emerging from my self-imposed exile to come out and give support to Curtis Stuehrenberg and his talk about "ACCellerating Your Test Planning".

From the BAST meetup post:

"One of the most pervasive questions we're asked by people testing within an agile environment is how to perform test planning when you've only got two weeks for a sprint - and you're usually asked to start before specifications and other work is solidified. This evening we plan on exploring one of the most effective tools your speaker has used to get a test team started working at the beginning of a sprint and perhaps even earlier. We'll be conducting a working session using the ACC method first proposed by James Whittaker and developed over actual practice in mobile, web, and "big data" application development."

For those not familiar with Curtis (and if you aren't, well, where have you been ;)? ):

Curtis is currently leading mobile application testing at the Climate Corporation, located in San Francisco, Seattle, and Kansas City. When not trying to help farmers and growers deal with weather and changing climate conditions, he devotes what little free time he can muster to using his 15 years of practical experience to promote agile software testing and contextual quality assurance at conferences like SFAgile, STPCon, ALM-Forum, and CAST, as well as in publications like Tea-time with Testers and Better Software magazine.

This is an extension of Curtis' talk from the ALM Forum in April. One of the core ideas is to ask "can you write your test plan in ten minutes? If not, why not?"

Curtis displayed some examples from his own product (including having each of us download the Climate Corp mobile app), and brought us into an example testing scenario and requirements gathering session. Again, rather than trying to make an exhaustive document, we had to be very quick and nimble with regard to what we could cover and how much time we had to cover it. In this case, we had the duration of the talk to define the areas of the product, the components that were relevant, and the attributes that mattered to our testing.

Session Based Test Management fits really well in this environment, and helps to focus attention for a given session. By using a very focused mission and a small time box (30 minutes or so), each test session gives the tester the ability to look at the attributes and components that make sense in that specific session. By writing down and reporting what they see, testers document their test cases as they are being run, and they also surface areas where they have totally new testing ideas based on the session they just went through; these in turn inform other testing sessions. In some ways, this method of exploring and reporting simultaneously allows for the development of a matrix that is more dense and more complete than one generated up front before actively testing.

The dynamic this time around was more personal and more focused. Since it was not a formal conference presentation, questions were more frequent, and we were able to address them immediately rather than waiting until the talk was finished. Jon Bach's idea of threads was presented and described: how it can help capture interesting data and help us consciously stay "on task", yet still note interesting areas to explore later (OK, I piped in on that, but hey, it deserved to be said :) ).

It's been a few months since we were able to get everyone together, so my thanks to Curtis for taking the lead and getting us together this month. We are looking forward to next month's Meetup, and we'll share the details as soon as we know what it is (and who is presenting it ;) ).

Thursday, May 1, 2014

Going "Coyote": Overcoming Fear and Uncertainty with "The Craddick Effect"

For those who have been following my comments about Harrison Lovell’s and my CAST 2014 talk ("Coyote Teaching: A new take on the art of mentorship"), this fits very nicely into the ideas we will be discussing. We’ve been looking back at interactions we have had over the years where mentorship played an important role in skill development. During one of our late night Skype calls, we were talking about skateboard and snowboard skills, and how we were able to get from one skill level to another.

One aspect we both agreed was a common challenge was “the fear factor” that we all face. In a broader sense, we both appreciate that snowboarding and skateboarding are inherently dangerous. Push the envelope on either and the risk of injury, and even death, is definitely possible. Human beings tend to work very hard at an unconscious level to keep ourselves alive. The amygdala is one of the most ancient parts of our brain. It deals with emotions, and it also deals with fear and aggression. It’s our “fight or flight” instinct. On one level, it’s perfectly rational to listen to it in many circumstances, but if we want to develop a technical skill like jumping or riding at speed, we have to overcome it.

About fifteen years ago, I first met Sean Craddick, a fellow snowboarder who was my age and was, to put it simply, amazingly talented. I used to joke whenever I saw Sean at a competition that I would say “oh well, there goes my shot at a Gold Medal!” He humored me the first couple of times, but the third time I said it, he surprised me. He answered “Dude, don’t say that. Don’t ever say that! I could try to throw some trick and land it badly, and scrub my entire run. I could miss my groove entirely, or miss a gate on a turn, or I could catch an edge and bomb the whole thing. Every event is up in the air, and every event has the potential of having an outcome we’d least expect. Don’t say you don’t have a chance, you always have a chance, but you’ll never get the chance if you don’t believe you have it.”

Because we were both the same age and had fairly similar life experiences, I’d hang out with Sean at many of these events, and sometimes run into him on off days when I was just up at the mountain practicing. One time, he noticed that I kept going by a tall rail and at the last moment, I’d veer off or turn and ride past it. After seeing this a few times, when he saw me about to veer off again he yelled “Hey Michael! The next time you veer off, stop dead in your tracks, unbuckle your board and walk back up here. I want to talk to you about something.” Sure enough, I went down, veered off course, and slammed to a stop. I took off my board, and then I walked up the hill. Sean looked at me and said:

“Take it straight on, and line your nose with the lip and where the beginning of the rail is.”

I nodded, buckled in, and then went down to the rail transition. I veered off. I stopped. I walked back up the hill.

“Lean back on your rear heel just a bit. It will give you a more comfortable balance when you first get on the rail.”

I nodded, buckled back in, went down again, and again I veered off. I stopped, unbuckled, and walked back to the top of the hill again. By this point I was winded, my calves were aching, my heart was pounding, and I was getting rather frustrated.

“One final thing. Do an ollie at the end of the rail.”

What? I hadn’t even gotten on the rail, so why was he telling me what to do when I got off of it? I shrugged, buckled in, went for the hit, and this time, I went straight, I lined the nose up, I set my weight back just a little bit, I slid down the rail, and I did a passably adequate ollie off the end of the rail, and landed the trick. When I did, Sean whooped and hollered, then came down after me and hit the same rail.

“Awesome, let’s go hit the chairlift!”

As we did, Sean looked at me and said:

 “You can understand all the mechanics in the world, but if your brain tells you 'you can’t do it, it’s too dangerous, it’s too risky', you need to get your body to shut your brain up! That’s what I had you do. I knew why you were sketching the last few feet. You were afraid. It felt beyond you. You might crash. It might hurt real bad if you do. The brain understands all that. It wants to keep you safe. Safe, however, doesn’t help you get better. Whenever I find myself giving in to the fear, I stop what I’m doing, right there, and I walk up the hill, and I try it again. If I pull back again, I walk the hill again, and again, and again. What happens is the body gets so fatigued that every fiber of your being starts screaming to your brain ‘just shut up already and let me do this!' Exertion and exhaustion can often help you overcome any fear, and then you can put your mechanics to good use.”

Yeah, I paraphrased a lot of that, but that’s the gist of what Sean was trying to get across to me. Our biggest enemy is not that we can’t do something, but that we are afraid we can’t do something. That fear is powerful, it’s ancient, and it can be paralyzing. That ultra-primitive brain can’t be reasoned with very well, unless we give it another pain to focus on. At some point the physical pain of exertion and exhaustion will out-shout the feelings of fear, and then we can do what we need to do.

In a nutshell, that’s “The Craddick Effect”. There may be a much fancier name for it, but that’s how I’ve always approached mentorship where I have to help a person overcome fear and doubt. When someone is afraid, it’s easy to retreat. As a mentor, we have to recognize when that fear is present, and somehow work with it.

You may not do something as extreme as what Sean did with me, but you may well find other, more subtle ways to accomplish the same thing. Imagine someone having to take on a new testing tool where there’s a lot that needs to be learned up front. We could just let them go on their own and let them poke around. We can take their word that they are getting and understanding what they need to, or we can prod and test them to see what’s really happening. If we see that they don’t understand enough, or maybe understand very little, don’t assume lack of aptitude or drive; look for fear. If you can spot fear, try to coax them in a way that lets them put their energy somewhere else for a time, so that they can get to a point where they can shout down the fear. It may be having them do a variety of simpler tasks, still fruitful, but somewhat repetitive and tedious. After a while, they will get a bit irritated, and then you can give them a slight push to move farther forward. Repeat as necessary. Over time, you may well see that they have slid past the pain and frustration point, and they just “get” what they are working with. It just clicks.

As a mentor, look to help foster that interaction. As a person receiving mentoring, know that this may very well be exactly what your mentor is trying to do. Allow yourself to go with it. In the end, both of you may learn a lot more about yourselves and your potential than you thought possible. It’s pretty cool when that happens ;).

Wednesday, April 30, 2014

Django 1.7 and… ME (yet another Live Blog)

So this might seem an odd spot, but this has become something of a mission on my part for 2014. I’ve decided that I want to try to become multi-lingual when it comes to web frameworks. We have all sorts of interesting frameworks to play with to make web apps and web sites, and Django is the Python-centric web framework, and therefore one I want to know more about and get more experience with. That seems a great reason to come into San Francisco and see what the Pythonistas are doing and how they are doing it with Django.

Yes, this is going to be live blogged, and as usual, it may be messy at first. Forgive the stream of consciousness, I promise I’ll clean it up later :).


A bit about our topic this evening (courtesy of Meetup):

Django 1.7 is one of the biggest releases in recent years for Django; several major new features, innumerable smaller improvements, and some big changes to parts of Django that have lain unchanged since before version 1.0. Come and learn about new app loading, system checks, customized select_related, custom lookups, and, of course, migrations. We'll cover both the advantages these new features bring as well as the issues you might have when upgrading from 1.6 or below.


A bit about our presenter this evening (also from Meetup):

Andrew Godwin is a Django core developer, the author of South and the new django.db.migrations framework, and currently works for Eventbrite as a Senior Software Engineer, working on system architecture. He's been using Django since 2007, and has worked on far too many Django websites at this point. In his spare time, he also enjoys flying planes, archery, and cheese.


Lightning Talks 

#1 Randall Degges - Django & Bcrypt

Randall kicked things off right away with a talk about how Django does password hashing and securing of passwords, along with the estimated cost of what it takes to crack a password (hint: it's not that hard). If you want to be more security alert, Randall recommends that we consider using BCrypt. It's been around awhile, and it allows for transparent password upgrading (users update their hash the first time they log in; no muss, no fuss :) ). Sounds kinda cool, to tell the truth; I'm looking forward to playing with it for a bit.
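If you want to try what Randall described, here's a minimal sketch of the settings change; it assumes the third-party bcrypt package is installed, and the hasher class paths are Django's stock ones (double-check them against the Django version you're running):

```python
# settings.py (sketch): prefer bcrypt for new passwords while keeping the
# default hashers listed afterwards, so existing PBKDF2 hashes still verify
# and get transparently re-hashed with bcrypt on the user's next login.
PASSWORD_HASHERS = (
    'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
    'django.contrib.auth.hashers.BCryptPasswordHasher',
    'django.contrib.auth.hashers.PBKDF2PasswordHasher',
    'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
    'django.contrib.auth.hashers.SHA1PasswordHasher',
    'django.contrib.auth.hashers.MD5PasswordHasher',
)
```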

#2 Venkata - Django Rest Framework w/ in-line resource expanding

The second talk discussed a bit of the Django REST framework. Some of the cool methods to handle drop-down, pop-open and other events were covered very quickly, along with some quick details as to what each item can do. It was a quick discussion with fast flashes of code. I caught some of the details, but I'll be the first to admit that a lot of this flew right past me (which gives me a better idea of areas I need to get a little more familiar with). Granted, this is a lightning talk, so that should be expected, but hey, I pride myself on being able to keep up ;).

#3 Django Meetup Recap

The third lightning talk was a recap of what the Django group has been covering in previous meetups (Ansible, Office Entrance Theme Music, Integrating Django & NoSQL, etc.). Takeaway: if we want resources after the meetups are over, we have a place to go (and I thank you for that :) ).

---

Andrew Godwin's Talk

This seems like a great time to say that I'm relatively new to Django, so a lot of what's being discussed is kind of exciting, because it makes me feel like I'll be able to get into what's being offered without having to worry about unlearning a lot of things to feel comfortable with the new details. Part of the new code is an update to South (which, as mentioned above, is something Andrew is intimately involved in).

Andrew covered details as to how apps are loaded, and how the new system checks can warn programmers about what may happen with an upgrade. Having suffered through a few updates where things worked, then didn't, without any clue as to why, I find this very appealing.

Another new aspect is an adjustable and tunable prefetch option, so that instead of all or nothing, there's a spectrum of choices that can be looked up and used based on context.
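If I followed the slide correctly, this is the new Prefetch object. A rough sketch of what that tunable lookup might look like, assuming hypothetical Band and Album models with an 'albums' relation (nothing here is from the talk itself):

```python
from django.db.models import Prefetch

# Instead of prefetching every related album (all or nothing), tune the
# queryset that gets prefetched and stash it under a custom attribute.
bands = Band.objects.prefetch_related(
    Prefetch(
        'albums',
        queryset=Album.objects.filter(released=True).order_by('-year'),
        to_attr='released_albums',
    )
)

for band in bands:
    print(band.name, [album.title for album in band.released_albums])
```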

A rather ominous slide flashed across the screen saying "Important Upgrade Notes". One new detail is that all field classes need to have a deconstruct() method; it's now required for all fields. Additionally, initial_data is dead. It's important to have modules use data migrations instead. In short, don't automatically assume that older modules that use initial_data will cleanly work. I will take Andrew's word on that ;).
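To make that concrete, here's a small sketch of the requirement for a made-up custom field; the field itself is hypothetical, but the shape of deconstruct() (return the field's name, import path, positional args, and keyword args so migrations can recreate it) is the point:

```python
from django.db import models


class ShortLabelField(models.CharField):
    """A made-up custom field, used only to illustrate deconstruct()."""

    def __init__(self, *args, **kwargs):
        kwargs['max_length'] = 20  # we always force this value ourselves
        super(ShortLabelField, self).__init__(*args, **kwargs)

    def deconstruct(self):
        # Tell the migrations framework how to re-create this field:
        # (field name, dotted import path, positional args, keyword args).
        name, path, args, kwargs = super(ShortLabelField, self).deconstruct()
        del kwargs['max_length']  # set in __init__, so keep migrations clean
        return name, path, args, kwargs
```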

So what's coming up in Django 1.8? Definitely improvements in interactions with PostgreSQL, as well as migrations for contrib apps. But that's getting a bit ahead of the race at the moment. Expect Django 1.7 to hit the scene around May 15th, give or take a few days. Again, I will take Andrew's word on that ;).

---

There's no question, I feel a little bit like a fish out of water, and frankly, that's great! This reminds me well that there is so much I need to learn, especially if my goal of becoming a technical tester is going to advance farther than just wishful thinking or following pre-written recipes. It's not enough to just "know a framework" or "know my framework".

As was aptly demonstrated to me a year and a half ago, I spent a lot of time in the Rails stack, and then I went to work with a company that didn't use Rails at all. Did that mean all that time and learning was wasted? Of course not. It gave me a way to look at how frameworks are constructed and how they interact. I'm thinking of it like learning Spanish when I was younger. Don't get me wrong, I'm no great shakes when it comes to Spanish, but I understand a fair amount, and can follow along in many conversations. What's really cool is that it gives me the added benefit of being able to follow a little bit of both French and Italian as well, since they are closely related. That's how I feel about learning a variety of web frameworks. The more of them I learn, the easier it will be to move between them, and to understand the challenges they all face.

In any event, this was an interesting and whirlwind tour of some new stuff happening in Django, and I plan to come back and learn more, with an eye to understanding more next time than I did today. Frankly, that shouldn't be too hard to accomplish ;).


Thanks for hanging out with me. Have a good rest of the evening, wherever you are.


Friday, April 25, 2014

TECHNICAL TESTER FRIDAY - Getting UnGraphical with lynx and grep

Use lynx --dump to retrieve the contents of your Web site. Just hardcode all the page URLs. Redirect all the content to flat files, then use grep to look for patterns in your content. Start by looking for mistakes you commonly make. Save your greps in a file.

Wow, now this brings back some memories :). 

I first loaded up a lynx browser back in 1993, and this was my introduction to what the non-graphical World Wide Web looked like. Truth be told, I fairly quickly abandoned lynx as an everyday platform when NCSA Mosaic and the first version of Netscape came out, but there is indeed value in using lynx. It’s a nice tool to add to accessibility tests, so that you can see what your super pretty graphical page looks like to those who don’t have that option. For those curious... it looks like this (well, mine looks like this):

Yep, that's what the Web looked like in 1993. Cool, huh?


lynx --dump does exactly what it sounds like.

Here’s an example from my own little site project:

lynx --dump http://127.0.0.1/web/orchestra/index.php

This prints the following to the screen:

Adding a redirect (‘>’) puts the output in a file for us. Repeat a bunch of times, and you can pull down details on every page in your site.
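Since I'd rather not type that redirect by hand for every page, here's a rough Python sketch of automating the dump step (my own wrapper, not part of the exercise); it assumes Python 3 and that lynx is on your PATH, and the URL list and output directory are placeholders:

```python
import os
import subprocess

# Hardcode the page URLs for the site, per the exercise.
PAGES = [
    "http://127.0.0.1/web/orchestra/index.php",
    # ...add the rest of the site's pages here...
]

OUT_DIR = "dumps"
os.makedirs(OUT_DIR, exist_ok=True)

for url in PAGES:
    # Name the flat file after the last path segment, e.g. index.php.txt
    name = url.rstrip("/").split("/")[-1] or "root"
    out_path = os.path.join(OUT_DIR, name + ".txt")
    with open(out_path, "w") as out_file:
        # Same as running: lynx -dump <url> > dumps/<name>.txt
        subprocess.run(["lynx", "-dump", url], stdout=out_file, check=True)
```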

OK, cool, that’s interesting, but what does that do for us? It allows us to go through and pull out data that we’d want to analyze. Granted, the site as it exists right now isn’t all that spectacular, but it does give us a basis for how we can construct some simple greps. 

For those not familiar with this tool, "grep" is an old UNIX standby. The term comes from the syntax of the “ed” editor, where the command was g/re/p (“globally search for a regular expression and print it to stdout”). Those of you with Windows machines can download Grep for Windows at http://gnuwin32.sourceforge.net/packages/grep.htm, or you can find a variety of fun and interesting versions. For me, since my system is in a virtual environment, I'm just going to save the files to my shared folder space and play with grep on my Mac :).

The main benefit of using grep is to look for things that show up in your pages that you may find interesting, or things that might be errors. Searching for basic strings in these files can show a lot of interesting details in the content of the pages. As a quick set of examples, I recommend poking around on this page for 15 command examples of ways you can use grep to get interesting data.

Once you find a few greps that you find useful, it's a good idea to save those in a file so that you can run them over and over again as you add content to the site and get more information to mine from your site.  
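Keeping with the Python sketch above, here's one way to treat that saved file of greps: one pattern per line in a greps.txt file (the file names and sample patterns are placeholders), scanned against every dumped page:

```python
import glob
import re

# greps.txt holds one pattern per line, e.g.:
#   lorem ipsum
#   TODO
#   \bteh\b
with open("greps.txt") as pattern_file:
    patterns = [line.strip() for line in pattern_file if line.strip()]

for path in glob.glob("dumps/*.txt"):
    with open(path) as page:
        for line_number, line in enumerate(page, start=1):
            for pattern in patterns:
                if re.search(pattern, line, re.IGNORECASE):
                    print("%s:%d: %s  (matched %r)"
                          % (path, line_number, line.strip(), pattern))
```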

This is meant to be a really basic first step in getting into the details of what your pages show and help get you away from using the browser as a main interaction source. Yes, there's a lot that can be done just with the files and the content that is in them. How you choose to look at them and what interesting details they show will be my focus for next week.

Wednesday, April 23, 2014

A Leaner and Cleaner Codecademy

A couple of years ago, I posted that I was excited to see the initiative that would become Codecademy get off the ground. At the time, it was limited in what it offered. It featured a course on JavaScript and some other small project ideas, and after a little poking around, I went on to other things. A year later I came back and saw that there was some new material, this time on Ruby and Python. A little more poking, and then I went on to do other things.

I made a commitment to roll through Noah Sussman's "ways to become a more technical tester", which I follow up on each Friday in my TECHNICAL TESTER FRIDAY posts. In that process, I decided it would be good to have a place that novice testers could go to learn some fundamentals about web programming. With that, I decided to give Codecademy another look, and I'm glad that I did.

For starters, Codecademy has refreshed everything on the site. They talk about it at length in "Codecademy Reimagined", and I for one am impressed with the level of depth they went into to describe the changes.

It has opened up a number of courses and updated several of the older offerings. The original JavaScript track has been deprecated (but it is still there if you want to work through it), and a new JavaScript track has been put in its place. The site has been augmented with a jQuery track, a freshened HTML/CSS track, and updates to the Ruby, Python, and PHP tracks as well.

In addition, there are several small project areas where users can practice and make "Codebits" to show what they have learned. Some of the Codebits are already assembled (examples include animating your name, making a solar system model, and a simple web site template), along with open format Codebits that users can share. Additionally, there are a variety of projects ranging from novice to intermediate and advanced levels so that you can practice what you are learning.

Another cool section is the API track. Currently, there are 29 APIs listed that users can experiment with, making applications that interact with the various APIs. The offerings range from YouTube to Twitter to Evernote, and the track also shows the languages best suited to using each particular API (JavaScript, Ruby and Python).

So how's the actual learning process? It's pretty solid, to tell the truth. Each track has a variety of initiatives, and a range of lessons and small projects interspersed throughout to keep the participant's attention. The editor can be finicky at times, but usually a page refresh will solve most of the odd problems. One of the nice attributes of having an account and working through the exercises is that your progress is saved. All of the steps from the first lesson to the last are recorded as part of your progress. That means you can go back and see your "cleared" examples and exercises.

Additionally there are Q&A Forums associated with each project, and so far, even when I've been stuck in some places, I've been able to find answers in each of the forums thus far. Participants put time in to answer questions and debate the approaches, and make clear where there is a code misunderstanding or an issue with Codecademy itself (and often, they offer workarounds and report updates that fix those issues). Definitely a great resource.  If I have to be nit-picky, it's the fact that, often, many of the Q&A Forum answers are jumbled together. Though the interface allows you to filter on the particular module and section by name, number and description, it would be really helpful to have a header for each question posted that says what module the question represents. Many do this when they write their reply titles, but having it be a prepended field that's automatically entered would be sweet :).

Overall, I think Codecademy has come a long way from when I first took a look at it about two and a half years ago. They have put a lot of effort into the site and their updates, and it shows. If you are already playing around at Codecademy, you already know everything I've written here. If you haven't been there in a while, I recommend a return trip. It's really become a nice learning hub. If you have never been there, and are someone who wants to learn how to program front end and back end web apps, and you like the idea of FREE, then seriously, go check the site out and get into a track that interests you. I'd suggest HTML/CSS, JavaScript, and jQuery first. From there, if you'd like to focus just on making web sites with little in the way of entry criteria, then check out the PHP track; otherwise, branch out into the Ruby or Python tracks, and work through the site at your own pace. It's not going to be the be-all and end-all destination for learning about programming, but seriously, you can make a pretty big dent with what you can learn here.

Tuesday, April 22, 2014

Selenium SF Live: An Evening With Dave Haeffner

It’s been about three years since I first met Dave. He was, at the time I met him, working with the Motley Fool, and was one of the people I connected with and recorded some fun (albeit rather noisy) audio with for what I had hoped would be a podcast from the Selenium Conference in 2011. Alas, the audio wasn’t as usable as I had hoped for a releasable podcast, but I remembered well the conversation, specifically Dave’s goal to see if he could, at some point, find a way to make Selenium less cryptic and more sturdy than what had been presented before.

Three years later, Dave stands as the author of “The Selenium Guidebook”, and tonight a couple of different Meetup groups (the San Francisco Selenium Users Group and the San Francisco Automated Testers) are sharing the opportunity to bring Dave in to speak. I’ve been a subscriber to Dave’s Elemental Selenium newsletter for the past couple of years, and I’ve enjoyed seeing how he can break down the issues and discuss them in a way that is not too overbearingly technical, giving the reader a new idea and approach they might not have considered before. I’m looking forward to seeing where Dave's head is at now on these topics.

Here's some details about Dave for those of you who are not familiar with him:

Dave Haeffner is the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing, including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.


This will be a live blog of Dave’s talk, so as always, I ask your indulgence with what gets posted between the time I start this and the time I finish, and then allow me a little time to clean up and organize the thoughts after a little time and space. If you like your information raw and unfiltered, well, you’ll be in luck. If not, I suggest waiting until tomorrow ;).

---

The ultimate goal, according to Dave, is to write tests that are business valuable, and then do what you can to package those tests in an automated framework. This then frees the tester to look for more business valuable tests with their own eyes and senses. Rinse, lather, repeat.

The first and most important thing to focus on is to define a proper testing strategy, and after that's been defined, consider the programming language that it will be written in. It may or may not make sense to use the same language as the app, but who will own the tests? Who will own the framework? If it's the programmers, sure, use the same language. If the testers will own it, then it may make sense to pick a language the test team is comfortable with, even if it isn't the same as the programming team's choice.

Writing tests is important, but even more important is writing tests well. Atomic, autonomous tests are much better than long, meandering tests that cross states and boundaries (they have their uses, but generally, they are harder to maintain). Make your tests descriptive, and make your tests in small batches. If you're not using source control, start NOW!!!

Selenium fundamentals help with a number of things. One of the best is that it mimics user actions, and does so with just a few common actions. Using locators, it can find the items that it needs and confirm their presence, or determine what to do next based on their existence or non-existence. Class and ID are the most helpful locators in the long term. CSS and XPath may be needed from time to time, but if that's more the rule than the exception, perhaps a chat with the programming team is in order ;). Dave also makes the case that, at least as of today, the CSS vs. XPath debate has effectively evened out. Which approach you use depends more on how the page is set up and laid out than on one approach being inherently better than the other.

Get in the habit of using tools like FirePath or FireFinder to help you visualize where your locators are, as well as to look at the ways you can interact with the locators on the page (click, clear, send_keys, etc.). Additionally, we'd want to create our tests in a manner that will perform the steps we care about, and just those steps, where possible. If we want to test a login script, rather than make a big monolithic test that looks at a bunch of login attempts, make atomic and unique tests for each potential test case. Make the test fail in one of its steps, as well as make sure it passes. Using a Page Object approach can help minimize the maintenance needed when pages are changed. Instead of having to change multiple tests, focus on taking the most critical pieces needed, and minimize where those items are repeated.
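Here's a minimal Python sketch of that idea (my own example, not Dave's code); the URL and the locator IDs are hypothetical placeholders for whatever your login page actually uses:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage(object):
    """Page object: the test talks to this class, not to raw locators."""
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")
    ERROR = (By.CSS_SELECTOR, ".flash.error")

    def __init__(self, driver):
        self.driver = driver
        self.driver.get("http://example.com/login")

    def log_in(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

    def error_is_shown(self):
        return len(self.driver.find_elements(*self.ERROR)) > 0


def test_invalid_login_shows_error():
    # One atomic test, one behavior: a bad password should show an error.
    driver = webdriver.Firefox()
    try:
        page = LoginPage(driver)
        page.log_in("tomsmith", "not the right password")
        assert page.error_is_shown()
    finally:
        driver.quit()
```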

Page Object models allow the user to tie Selenium commands to the page objects, but even there, there are a number of places where Selenium can cause issues (going from Selenium RC to Selenium WebDriver made some fundamental changes in how the interactions are handled). By defining a "base page object" hierarchy, we allow for a layer of abstraction, so that changes to the Selenium driver minimize the need to change multiple page object files.
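A sketch of that abstraction layer (again, my own naming, not Dave's): page objects inherit from a base class and call its wrappers, so a driver API change only has to be absorbed in one place:

```python
class BasePage(object):
    """Single choke point for Selenium calls used by every page object."""

    def __init__(self, driver):
        self.driver = driver

    def visit(self, url):
        self.driver.get(url)

    def find(self, locator):
        return self.driver.find_element(*locator)

    def click(self, locator):
        self.find(locator).click()

    def type_into(self, locator, text):
        self.find(locator).send_keys(text)

    def is_present(self, locator):
        return len(self.driver.find_elements(*locator)) > 0
```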

Explicit waits help time-bound problems with page loading or network latency. Defining a "wait for" option is more helpful, as well as efficient. Instead of hard coding a 10 second delay, the wait for allows a max length time limit, but moves on when the actual item needed appears.
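In Python-flavored Selenium, that "wait for" idea looks roughly like this (the locator in the usage note is a placeholder): cap the wait at ten seconds, but return the moment the element actually appears.

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def wait_for_visible(driver, locator, timeout=10):
    # Polls until the element is visible, or raises TimeoutException
    # after `timeout` seconds; no hard-coded sleep required.
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located(locator)
    )


# Example usage with a hypothetical locator:
# flash = wait_for_visible(driver, (By.CSS_SELECTOR, ".flash.success"))
```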

If you want to build your own framework, remember the following to help make your framework less brittle and more robust:
  • Central setup and teardown
  • Central folder structure
  • Well-defined config files
  • Tagging (test packs; subsets of tests such as wip, critical, component name, slow tests, story groupings); see the sketch after this list
  • A reporting mechanism (or borrow one that works for you; have it be human readable and summable, as well as "robot ready" so that it can be crunched and aggregated/analyzed)
  • Wrap it all up so that it can be plugged into a CI server.
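The talk itself was framework-agnostic, but as one possible (hypothetical) rendering of the tagging idea, here's how it might look with pytest markers; the marker names are my own and would be registered in pytest.ini to keep runs warning-free:

```python
import pytest


@pytest.mark.wip
@pytest.mark.login
def test_login_with_bad_password_shows_error():
    ...  # body elided; this sketch is about the tags, not the test


@pytest.mark.critical
@pytest.mark.slow
def test_full_checkout_flow():
    ...


# A "test pack" then becomes a marker expression at run time, e.g.:
#   pytest -m wip
#   pytest -m "critical and not slow"
```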

Scaling our efforts should be a long term goal, and there are a variety of ways that we can do that. Cloud execution has become a very popular method. It's great for parallelization of tests and running large test runs in a short period of time, if that is a primary goal. One definitely valuable recommendation: enforce random execution of tests. By doing so, we can weed out hidden dependencies, and find errors early and often :).

Another idea is "code promotion". Commit code, check to see if integration passes. If so, deploy to an automation server. If that works, deploy to where people can actually interact with the code. At each stage, if it breaks down, fix there and test again before allowing to move forward (Jenkins does this quite well, I might add ;) ). Additionally, have a "systems check" in place, so that we can minimize false positives (as well as near misses).

Great talk, glad to see you again, Dave. Well worth the trip. Look up Dave on Twitter at @TourDeDave and get into the loop for his newsletter, his book, and any of the other areas that Dave calls home.