Friday, February 28, 2014

TECHNICAL TESTER FRIDAY: Immersing in PHP and a Little Each Day

For some reason, this picture just sums up
my past two weeks perfectly ;).
Last week was crunch time at work, and the need to take care of something really important kept me from a timely update, so this is sort of a two-in-one post.

First, I've been looking at a variety of resources available online to learn about and practice using PHP. PHP can do a lot of interesting things: display output, pull in a variety of information sources, and simplify some tasks. To do that, though, there's a fair amount of tinkering involved.

Second, just like HTML all by itself will not give a web site a nice look and feel, PHP will not be the be all and end all of interactivity, either. Setting a site up from scratch means that there is a fair amount of interplay to work out, configuration details to tweak, and a lot of refreshing to see changes. Also, without a back end database, much of what is being done in the pages is superficial and not very interesting, though it does help to hammer out syntax.
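
To give a concrete sense of that tinkering, here's a minimal sketch of the sort of page I've been using to hammer out syntax. Without a back end database, the array below is just a stand-in for a real data source, and the names in it are made up:

```php
<?php
// A stand-in "data source": without a database, a simple array
// is enough to practice generating page output with PHP.
$links = [
    'Codecademy' => 'http://www.codecademy.com',
    'PHP Manual' => 'http://php.net/manual',
];

// Build an HTML list, escaping values as we go.
$html = "<ul>\n";
foreach ($links as $name => $url) {
    $html .= '  <li><a href="' . htmlspecialchars($url) . '">'
           . htmlspecialchars($name) . "</a></li>\n";
}
$html .= "</ul>\n";

echo $html;
```

Superficial, sure, but swapping the array out for a database query later is exactly the kind of interplay that has to be worked out.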

It's a small victory, but hey,
I'll take it!
I've finished the Codecademy modules that cover PHP (YAY ME!!!).

There's some oddity with their interface when it comes to completing certain assignments and exercises. I have found myself unable to complete a module that I have been actively working on, even though the "code" is correct for the context. I have also closed down my browser, reopened it, gone back to the section I was just working on, clicked Save again, and gotten a "Success".

Why do I mention this? Because I'm willing to bet others might be struggling with some examples and scratching their heads wondering why Codecademy isn't accepting their results. Often there are typos, and those are easy to fix, but if you find yourself in a spot where you cannot get it to work, no matter what you do, try closing your browser and coming back to the module and saving again.

Noah stated in the initial comment that we should be prepared to spend a few weeks on this initial project, and to be open to the fact that we will be doing a lot of mucking around to get it to work. Just being able to manipulate pictures would be considered a positive milestone. I thought this would be relatively quick; I mean, how hard could it be to just put up a simple site with PHP? But the point is not to just put up a site with PHP; there are lots of ways to do that superficially. The image manipulation challenge is what sets it apart. Noah gives us an authentic problem, and asks us to solve it, without guidance as to how or what to use to do it.
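
For the curious, here's a rough sketch of the direction I've been experimenting with for the image manipulation piece, using PHP's GD extension. This assumes GD is enabled, and it's my own stab at it, not anything prescribed by the assignment; in a real page the source would come from something like imagecreatefromjpeg('photo.jpg'), but a blank canvas keeps the sketch self-contained:

```php
<?php
// A rough thumbnail sketch using the GD extension (assumed enabled).
// imagescale() preserves the aspect ratio when the height is omitted
// (available in PHP 5.5+).
function make_thumbnail($src, $newWidth) {
    return imagescale($src, $newWidth);
}

// Stand-in for imagecreatefromjpeg('photo.jpg'): an 800x600 blank canvas.
$source = imagecreatetruecolor(800, 600);
$thumb  = make_thumbnail($source, 200);

// Report the scaled dimensions.
echo imagesx($thumb) . 'x' . imagesy($thumb) . "\n";
```

In a page, you'd then hand the result to imagejpeg() to write it out. Getting from here to something presentable is where the mucking around lives.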

This process led me down several paths and experiments. I set up a local stack on my personal machine. I set up a LAMP server in a virtual space. I set up a site already on the open web to use PHP and experimented with commands and syntax. I rolled several pages of my own to see how it all fits together. I downloaded a ready packaged "template" to get some ideas and save me some keystrokes. I swapped ideas between the home grown pages and the template pages. In short, I tried things, I tweaked them, I went down several dead ends, and I predict I'm going to go down several more.

One thing I learned from my time when I was releasing a podcast every week was that I had to learn just how long it would take to do something. At first, I was wildly overly optimistic. I figured my skills with writing music and doing audio editing would make something as simple as editing a podcast a breeze. A clip here, a snip there, and all would come together. If I wanted to have a slap-dash product, with little regard to the end experience of the listener, that was true. It took little time at all to edit a program that sounded hacked and choppy, but hey, it got the main points across. To make it sound good, to make the audio flow naturally, to remove pause words (the ums, ahs, likes and you knows), to make the transitions sound smooth and clean, and to preserve the natural narrative so that the interviews and programs were comfortable to listen to, took a considerable amount of time.

I realized I couldn't put in a four hour editing session and have a product that sounded good, but I could put in four one hour editing sessions spread over several days and make a podcast that sounded great. The difference? Spreading out the effort is vital, because discernment and clarity come with repeated practice, and some down time to let the brain reflect offline. We don't get that same level of clarity when we try to push everything into one night to put it all together. My mistake has been more of the latter and less of the former. When we do that, we seek short cuts. We look for quick hacks that "work", for some definition of "work". Our standards for what is "acceptable" go way down, and we repeatedly say "oh heck with it, I have it working, it's good enough". Doing it a little bit at a time, and coming back to reflect on what we are doing, lets us see things that could be done better, and that we realize we really can do better, and without a lot of extra pain and effort. Last week, I tried to put it all together at one time, and was frustrated. This week, I managed a little more spacing, and got closer to a level of skill that I could feel like I'm doing something useful, but I know there's lots more I need to do to even have something basic in place.

So yeah, the past two weeks have been hectic, scattered, and less focused and more "bunched up" in my efforts than I want them to be. It feels like how I see many programmers having to work because of issues and changing priorities, and I have a greater empathy for them and what they go through, even to meet my own arbitrary "deadlines". If that is part of the "lessons learned" that Noah wants to encourage, I think it's working very well.

A Weekend Testing Follow Up: Ubertesters Wants to Talk To You :)

Earlier in February, the Weekend Testing Americas chapter held a session for, and with, an app called Ubertesters. This is a wrapper/SDK around an app that lets people on mobile devices report what they see and send their feedback to programmers and stakeholders without having to move to a different machine.

One of the benefits of Weekend Testing is that we also get feedback from the organizations providing the apps to test. This was shared with me after the session:

I was amazed with the passion of your team to testing and new technologies and tried to stay on track on the way to the airport. I carefully read all the comments and feedback and appreciate them a lot. All the members did a great job, and it will help us to improve Ubertesters user experience for sure.

Additionally, Ubertesters contacted me and asked if I'd be willing to share this with the Weekend Testing community, and those who are part of this "passionate" group of testers. I said I'd be happy to.

From Ubertesters:

'Would you like to become part of a global testing provider and join Ubertesters team ( 
Ubertesters is announcing enrollment to their team for testers who are passionate about testing, want to take part in various interesting projects, and make some money in the process. For more details, please contact'

I think this is pretty cool, and it's becoming part of a neat trend I've seen as of late. When the product owners and stakeholders of products take part in the sessions, they see and learn a great deal. Not just about their product in action, but their product in action in the hands of testers who really care about their craft. A number of people who have participated in sessions have told me afterwards that stakeholders contacted them to ask if they'd be interested in talking about further opportunities. This is a continuation of that, and it's a result I'm happy to see.

To those who would like to follow up with Ubertesters, a favor: if you do, please mention you are coming to them via Weekend Testing. I can't guarantee that will give you a better chance of getting in on what they are looking to do, but judging from the feedback and the direct request, I'd say the odds are pretty good ;).

Again, thanks to all who participate in these monthly events. Your energy and enthusiasm are what make them worthwhile and fun to do. Additionally, as I hope the above illustrates, it's noticed.

Tuesday, February 25, 2014

Test Retreat 2014: Why I'm Going, and Why You Should, Too :)

First, I need to preface this with something you will be seeing a lot from me in the coming weeks:

CAST 2014 will be held in New York City August 11-13, 2014.

I will be giving a talk along with Harrison Lovell. Details on this will follow, but not until it gets officially posted.

I want to see as many of you as possible come attend CAST 2014, because I feel it is one of the best, if not THE best, software testing conferences a software testing practitioner can attend and come away with real value for their time and investment. As I said previously, you will see me doing a lot more commentary and promotion for CAST going forward.

Having said all that, I want to talk about something that is a preamble to CAST, and looks to be turning into an annual event that I support and want to see thrive. That event is called "Test Retreat".

Test Retreat is rapidly becoming one of my favorite events to attend each year. This year it will be held Saturday, August 9, 2014, from 8:30 AM to 4:30 PM (EDT). I attended the inaugural event at CAST 2012 in San Jose, CA, and the follow-up in Madison, WI during CAST 2013.

What makes Test Retreat worthwhile? 

Many other events allow a select few to present on ideas that have to be highly structured. The Open Conference option that Test Retreat offers allows me, and others, to present ideas that might be very preliminary and embryonic. Through the event, these preliminary ideas often develop into calls for action, with input from many other participants, that are significantly better than anything I would have proposed on my own. It's this rich level of interaction, conferring with peers, and all willing to work together to develop "better ideas" that make this format a success. Several of my better talks (Let's Stop Faking It, Balancing ATDD, GUI Automation and Exploratory Testing, and others) that I have given at Meet-Ups, conferences and have written up as published articles and papers have had their genesis in Test Retreat.

For those wondering if it makes sense, or if it's worth it to attend a Saturday event (yes, I know Saturdays  are precious), I say "yes"! So far, it has proven to be every bit worth it these past two years. I've already signed up for year three. 

Will I see you there? Will we, perhaps, come up with better ideas together than we would separately? I hope you will come attend and find out!

Wednesday, February 19, 2014

Book Review: A Web For Everyone

When I started working at Socialtext, I came in right at a time when we were working on a large Accessibility project. For those not familiar with the term Accessibility, it’s the variety of standards, tools, and devices that collectively allow for individuals with disabilities to get access to the information, either on their systems or on the web, and interact with it as seamlessly as everyday users that do not have disabilities. 

There are several standards that can be referenced and used as starting points for understanding accessibility, and a variety of tools, both free and commercial, exist to help the programmer and tester address accessibility issues, create fixes, and test them to see if they work as intended. Over the past year, Accessibility has become a focal point of my testing practice, one I didn’t spend much time thinking about or doing prior to working here.

While it’s important to understand accessibility, it would be even better if more people gave thought to accessibility, and to testing for accessibility, in their design decisions, and made the case early in the process that accessibility for all users (or as many as possible) is an important part of our mission as product owners, creators, designers, and testers. Sarah Horton and Whitney Quesenbery approach this challenge and this mission with their book “A Web For Everyone”. More than just ways to code and test for accessibility, this book attempts to help anyone who creates software applications develop the understanding and the empathy necessary to make design decisions that truly help to make "A Web for Everyone" possible.

So how do Horton and Quesenbery score on this front?

Chapter 1 lays out the case for why we all should consider creating A Web for Everyone, starting with the principles of inclusive design. Inclusive design can be seen as the intersection of good design, usability, and accessibility. The Web Content Accessibility Guidelines (WCAG) 2.0 standard is introduced, which encompasses various accessibility standards in use in the U.S., the U.K., the European Union, and elsewhere, and which is organized around the POUR principles: content should be Perceivable, Operable, Understandable, and Robust. Alongside WCAG, universal design contributes seven principles for designing products that work for the widest range of abilities: Equitable, Flexible, Simple and Intuitive, Perceptible, Tolerant of Error, and considerate of Physical Effort and of Size and Space. This chapter also focuses on Design Thinking, which emphasizes understanding human needs first, rather than letting the technology dictate the scope or direction. Combining WCAG, POUR, universal design, and design thinking, starting with the user experience, we can make great strides in designing sites and applications that work for the broadest variety of users and ability levels.

Chapter 2 introduces us to the idea of People First Design by introducing us to eight different people. To those unfamiliar with the idea of “personas”, this is a great introduction, and a really nice modeling of the idea. Rather than make eight abstract personas, the book outlines eight people in quite specific detail; their physical and cognitive abilities, their skill with technology, their understanding and knowledge, and their attitudes (motivation, emotions, and determination) are fleshed out so that we can relate to them, as well as to their unique challenges and abilities/disabilities. With so much detail, we can empathize with them as though we actually know them. A personal interjection here: even with this level of detail, personas are, by necessity, incomplete. They are stand-ins for real people. While we will have to “fill in the blanks” for a fair number of things, the more complete a persona we can make and identify with, the more likely we will be able to consider their interactions and design effectively for them.

Chapter 3  sets the stage for the idea of “Clear Purpose”. Clear Purpose starts with understanding our audience, putting “Accessibility First” to ensure that the broadest group of people can use the product effectively. Emphasis on universal design and equivalent use are more desirable than accommodation, since accommodation usually makes for a less fulfilling experience.

Chapter 4 focuses on Solid Structure. An emphasis is placed on the various markup and presentation options, including those associated with HTML, HTML5, and WAI-ARIA; separating the content from the presentation (yes, CSS is useful for accessibility as well as for eye candy); and organizing content in a way that lets screen readers and assistive technologies get to the most important content first. Most important, sites with well-defined structure help remove barriers and give users confidence they can find what they need when they use a site or an application.
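
As a concrete illustration of the kind of structure the chapter describes, here's a bare-bones sketch. The elements and landmark roles are standard HTML5/ARIA; the content is made up:

```html
<body>
  <!-- Landmark roles let assistive technology jump between regions. -->
  <header role="banner">
    <h1>Site Title</h1>
  </header>
  <nav role="navigation">
    <ul><li><a href="/">Home</a></li></ul>
  </nav>
  <main role="main">
    <article>
      <h2>Most important content first</h2>
      <p>A screen reader can skip straight to this landmark.</p>
    </article>
  </main>
  <footer role="contentinfo">Contact and copyright details here.</footer>
</body>
```

The explicit role attributes double up what the HTML5 elements already imply, which helps older assistive technology that doesn't yet map the new elements.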

Chapter 5 covers Easy Interaction. By focusing on making interactions easy for those with disabilities, we go a long way toward developing a product that is easier for everyone to interact with. That means supporting keyboard interactions, and using hooks in HTML and CSS that let assistive technologies provide more detail, outline the area that currently has keyboard focus, or announce to the user where that focus is. Easy interaction enables users to control the interface, with large enough controls, and avoids taking unexpected actions for users that they can do on their own. It also includes both preventing and handling errors in an accessible way.
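
A small sketch of the keyboard-focus idea in CSS; the selector list and colors here are illustrative, not from the book:

```css
/* Make keyboard focus obvious instead of relying on the browser default. */
a:focus,
button:focus,
input:focus {
  outline: 3px solid #1a73e8;
  outline-offset: 2px;
}

/* The classic mistake is the reverse: *:focus { outline: none; }
   hides focus from keyboard users entirely. */
```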

Chapter 6 is all about orientation and navigation, or as the book puts it, “Helpful Wayfinding”. This consists of being consistent with design elements, mapping items similarly on pages so that the look and feel remains consistent, differentiating only where it really makes sense to differentiate, and including information as to where a user actually is in the site (think of the breadcrumb trail we often take for granted). ARIA roles work with HTML and HTML5 tags to help define where on the page the user is, so that assistive technology can reference those regions and provide meaningful alerts and signposts. The chapter also gives some good suggestions on how to pattern links and navigation items, such as using action words as links, presenting links in an obvious and consistent way, including images as clickable elements, keeping the navigation process simple, and resisting the temptation to bury content in layers of submenus.

Chapter 7 focuses on Clean Presentation: placing a focus on the visual layout, images, and fonts for easy perception. To get the best effect, Clean Presentation takes into consideration a variety of visual disabilities, and allows users to customize the look and feel using native browser options or controls in the site or app itself (think of the font enlarging/shrinking of a Kindle eBook as an example). Stylesheets can help considerably in this regard, allowing the visuals to be modified both at the user-preference level and at the device level (laptop vs. tablet vs. phone). Ensuring contrast between the text and background, and paying attention to font size, style, weight, and spacing, as well as the contrast within images, can also help make a site more usable.
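
A sketch of how a stylesheet can support that kind of customization; the specific values are illustrative:

```css
/* Readable defaults: relative font sizes respect the user's own
   browser settings, and the colors keep strong text/background contrast. */
body {
  font-size: 100%;        /* start from the user's preferred size */
  line-height: 1.5;
  color: #222;            /* dark gray on near-white: high contrast */
  background-color: #fafafa;
}

h1 { font-size: 2em; }    /* em units scale along when the user zooms text */
```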

Chapter 8 gets to the heart of the matter regarding “Plain Language”. Not to be confused with “dumbing down” content, this means writing to the intended audience with words and terms they will understand. Plain language also helps drive content presentation: clear headings, columns to break up text, small paragraphs, bullet points, and judicious use of bolding, italics, and links. In general, make it a point to read your site regularly, and bring in your personas to consider what they would think of your presentation.

Chapter 9 covers Accessible Media. Much of what we post relies on visual or auditory cues. Images and audio/video are the most obvious, and there are differing challenges depending on the individual and the disability. For users who cannot see, image-rich sites can be little more than large blank canvases with a few words here and there. Audio files cannot easily be consumed by those who cannot hear. This is where alt attributes for images, closed captioning for video files, transcripts for audio files, and other ways to describe the content on the page go a long way toward letting more people engage with the content.
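
In markup terms, a couple of those techniques look like this; the file names and caption text are placeholders:

```html
<!-- Alternative text gives non-sighted users the content of the image. -->
<img src="chart.png"
     alt="Bar chart: support requests fell by nearly half after the redesign">

<!-- Captions ride along with the video via a track element. -->
<video controls>
  <source src="talk.mp4" type="video/mp4">
  <track kind="captions" src="talk-captions.vtt" srclang="en" label="English">
</video>
```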

Chapter 10 brings us to Universal Usability, or, put another way, a focus on a great user experience for all, which goes a long way toward incorporating accessibility ideas for everyone. Technology should not have to be fought with to get a job done or to accomplish a goal. Well-designed sites and apps anticipate user interaction and help guide users to completion, without requiring a lot of redirection or memorizing of options to get a task done. A popular phrase is “Don’t make me think!” The site or app should let users focus on their goal, not on figuring out how the site or app needs them to achieve it.

Chapter 11, In Practice, looks to unify the concepts in the book into a holistic approach. More than just using the techniques described, the organization as a whole needs to buy into the value of accessibility as a lifestyle and a core business goal. Start by evaluating your current site and seeing where you're doing well and where changes would be valuable. Determine what training, skills, people power, and infrastructure will help you get from A to B. Also, while personas are a great heuristic, they do not take the place of flesh-and-blood people dealing with the various disabilities we want our software to be accessible to. Look to interact with and develop relationships with real people who can give a much clearer understanding of the challenges, so that you can come up with better solutions.

Chapter 12 considers The Future, and what A Web for Everyone might look like. Overall goals include a web that is ubiquitous, where accessibility is part of design from the ground up. Another goal is flexibility as a first principle of design. Our interaction should be invisible, or at least stay out of our way as much as possible. Methods in which we will get to that place will involve being more inclusive and diverse in our understanding of who uses our products, and how they use them. We’ll need to make accessibility part of the way we think, not as an afterthought after we’ve done everything else.

The book ends with three appendices. Appendix A is a brief listing of all the principles discussed in each chapter, and would make an excellent starting point for any story workshop or test design session; it is a great set of “What if?” questions for software testers. Appendix B is a summary of the WCAG 2.0 standard and how its various sections map to the chapters of the book. If you want to get into the nitty-gritty details of the spec, or get access to other links and supporting documentation, it's all here. Appendix C gives a lengthy list of additional reading (books, links, etc.) about the topics covered in the book. If you want to know more about any specific area, check here.

Bottom Line:

"A Web For Everyone" gives a lot of attention to the personas and real-world examples of accessible design, which are interspersed through all of the chapters. We see their plights, and we empathize with them. These persona examples are humanizing and very helpful; they help us see the what and the why of accessibility. We see many interesting design ideas shared, but we get only a few examples of the how. Implementation is discussed, but only a few ideas are fleshed out beyond the basic prose. This is not necessarily a criticism, because A Web for Everyone does a really good job explaining the whats and whys of accessible design. High-level design ideas and technology briefs are handled quite nicely; a variety of coding examples showing the ideas in actual practice, not so much. If you are looking for a guide to coding sites for accessibility, with exercises and examples to create, this isn't that book. If, however, you want to get inspired to make, test, or promote software that is usable by a broader variety of people, this book does an admirable job.

Tuesday, February 18, 2014

There's Always Something You Didn't Consider

This past weekend, I had the honor and pleasure to celebrate three new Eagle Scouts in my Troop at a Court of Honor that we held for them this past Saturday. Since the boys in question were all associated with Order of the Arrow, they have the right to have a special "Four Winds" ceremony performed for them. Since I'm associated with the Dance Team that our O.A. Lodge hosts (my primary role in O.A. is "Dance Team Advisor"), I figured it would make sense to present our Dance Team's version of this ceremony.

Our Dance Team ceremony is pretty well known and regarded. We have a recorded narrative that mixes in spoken word and Native American Pow Wow songs, as well as ambient background music. We mix in multiple dance styles, representing both female and male dancers and dance styles (typically Jingle Dress, Fancy Shawl, Fancy Dance, and Grass Dance). The outfits that we have are elaborate, and they take a lot of time to put together, put on, and take off. The preparation can often take 45 to 60 minutes for a presentation that rarely lasts longer than fifteen or twenty minutes. Thus, my goal has been to engineer the process so that the materials can be put together quickly, taken apart quickly, and most important, put on and taken off quickly. To this end, I modified all of the clothing items I could to use side-clip fasteners, and made them as adjustable as possible. I cut out the back of an old school backpack and attached it to the top cape of the dance outfit so that the neck bustle could be more easily put on and taken off. To make the fancy dance outfit even easier, I stitched the "angoras" (lower leg decorations that are made from sheep's hair) and the dance bells together into one piece, with the side clips and webbing to make them super easy to take on and off. I tested them on me, and on another analog (a younger scout), and figured it would work well for all concerned.

I'm guessing some of you already know where this is going, don't you ;)?

The day of the performance, we get everyone together, and I assemble everything and show them how to get into and out of the gear. Everything works flawlessly... except for one thing. The angoras for the fancy dance outfit, with their wrap sleeves, side-ring clips, and webbing, even when closed down to the absolute tightest level, were still loose on the scout doing the dance. I had figured I'd covered the skinniest possible kid I could think of. Truth be told, no, I hadn't, and here he was, right in front of me, wondering what to do. I told him to grab a pair of bandannas and tie them below the bells to add some extra support and pressure. When he went out to dance, even with the added support of the bandanna, one of the sets of bells and angoras started sliding down his leg. At this point he looked at me with a mix of bewilderment and horror... "what do I do now?!" The only answer I could telegraph to him was "keep going". He saw that stopping to adjust was not an option, so he adapted his steps to minimize the view of the drooping bells, and after his performance was finished, he went to the area where he was to "stand as sentry" and stood still while the rest of the performers did their parts.

Afterwards, many of the attendees walked up to the dancer and congratulated him on an excellent performance. Not one of them mentioned the "mishap", though his Mom later pointed out that he looked to be struggling, but adapted effectively to the situation. I learned that as we get closer to the end of a project or a hard deadline, we sometimes make totally innocent lapses in our thinking, and make choices that seem perfectly rational but miss something important. Sometimes these events can be embarrassing, but at the same time, I told the boy in question, "Sure, the bells drooped, and you couldn't put on a 'perfect' performance. On the other hand, of all the boys in the Troop and the Lodge that could have been out there, you were the one who actually suited up to dance." Most people will not remember that his bells drooped. They will remember that he stepped up and did something hard, something intricate, and did a pretty darned good job.

We had a quick "retrospective" on the event, and we all talked about what we could do to make it work better the next time. I got some valuable feedback on the attachment designs I used, and how to modify them to make them even more effective and with a broader range for use. Most of all, though, I was reminded that, no matter how hard you try, no matter how much ground you cover, there's always something you didn't consider.

Friday, February 14, 2014

Book Review: The Modern Web

I am zeroing in on clearing out my backlog of books that came with me on my flight to Florida. I have a few more to get through, some decidedly "retro" by now, and a few that some might find amusing. No Starch Press publishes "The Manga Guide to..." series, and I have three titles that I'm working through, related to databases, statistics, and physics. (Consider these the "domain knowledge in a nutshell" books; I'll be posting about them in a couple of weeks.) With that out of the way ;)...

The web has become a rather fragmented beast these past twenty-some-odd years. Once upon a time, it was simple. Well, relatively simple. Three-tiered architecture was the norm, HTML was blocking, some frames could make for structure, and a handful of CGI scripts would give you some interactivity. Add a little JavaScript for eye candy and you were good.

Now? There’s a different flavor of web framework for any given day of the week, and then some. JavaScript has grown to the point where we don’t even really talk about it, unless it’s to refer to the particular library we are using (jQuery? Backbone? Ember? Angular? All of the above?). CSS and HTML have blended, and the simple structure of old has given way to a myriad of tagging, style references, script references, and other techniques to manage the mishmash of parts that make up what you see on your screen. Oh yeah, lest we forget, “what you see on your screen” has also taken on a whole new meaning. It used to mean a computer screen. Now it’s computer, tablet, embedded screen, mobile phone, and a variety of other devices with sizes and shapes we were only dreaming about two decades ago.

Imagine yourself a person wanting to create a site today. I don’t mean going to one of those all-in-one site hosting shops and turning the crank on their template library (though there’s nothing wrong with that); I mean the “start from bare metal, roll your own, make a site from scratch” kind of thing. With the dizzying array of options out there, what’s an aspiring web developer to do?

Peter Gasston (author of "The Book of CSS3”) has effectively asked the same questions, and his answer is “The Modern Web”. Peter starts with the premise that the days of making a site for just the desktop are long gone. Any site that doesn’t consider mobile as an alternate platform (and truth be told, for many people, their only platform) is going to miss out on a lot of people. Therefore, the multi-platform, device-agnostic ideal is set up front, and the explanations of available options take that mobile-inclusive model into account. Each chapter looks at a broad array of possible options and available tools, and provides a survey of what they can do. Each chapter ends with a Further Reading section that will take you to a variety of sites and reference points to help you wrap your head around all of these details.

So what does “The Modern Web” have to say for itself?

Chapter 1 describes the Web Platform, sets the stage, and talks a bit about the realities that have led us to what I described in the opening paragraphs. It’s a primer for the ideas that will be covered in the rest of the book. Gasston encourages the idea of the "web platform”, which contains all of the building blocks to be covered, including HTML5, CSS3, and JavaScript. He also encourages the reader to keep up to date on the developments of browsers: what they are doing, what they are not doing, and what they have stopped doing. Gasston also says “test, test, and then test again”, which is a message I can wholeheartedly appreciate.

Chapter 2 is about Structure and Semantics, or to put a finer point on it, the semantic options now available for structuring documents using HTML5. One of them has become a steady companion of late: the Web Accessibility Initiative’s Accessible Rich Internet Applications, or WAI-ARIA (usually shortened to ARIA by yours truly). If you have ever wanted to understand accessibility and the broader Section 508 standard, and get a greater appreciation of what you can do to enable it, ARIA attributes are a must. The ability to segment the structure of documents based on content and platform means that we spend less time trying to shoehorn our sites into specific platforms; instead, we make a ubiquitous platform that can be accessed depending on the device, and create the content to reside in that framework.

Chapter 3 talks about Device-Responsive CSS, and at the heart of that is the ability to perform "media queries". What that means is, "tell me what device I am on, and I'll tell you the best way to display the data." This is a mostly theoretical chapter, showing what could happen with a variety of devices and leveraging options like mobile-first design.

Chapter 4 discusses New Approaches to CSS Layouts, including how to set up multi-column layouts, a look at the Flexbox tool and the way it structures content, and leveraging the Grid layout so familiar to professional print publishing (defining what a space is, where the space is, and how to allocate content to a particular space).

Chapter 5 brings us to the current (as of the book's writing) state of JavaScript, and the fact that today's JavaScript has exploded with available libraries (Gasston uses the term "Cambrian" to describe the proliferation and fragmentation of JavaScript libraries and capabilities). Libraries can be immensely useful, but be warned, they often come at a price, typically in the performance of your site or app. However, there is a benefit to having a lot of capabilities and features that can be referenced under one roof.

Chapter 6 covers device APIs that are now available to web developers thanks to HTML5: options such as Geolocation, utilizing Web Storage, using utilities like drag and drop, accessing the device's camera and manipulating the images captured, connecting to external sites and apps, and so on. Again, this is a broad survey, not a detailed breakdown. Explore the Further Reading if any of these items interests you.

Chapter 7 looks at Images and Graphics, specifically Scalable Vector Graphics (SVG) and the canvas option in HTML5. While JPEGs, PNGs and GIFs are certainly still used, these newer techniques allow for drawing vector and bitmap graphics dynamically. Each has its uses, and the chapter includes some sample code snippets to demonstrate them in action.

Chapter 8 is dedicated to forms, or more to the point, to the ways that forms can take advantage of the new HTML5 options to help drive rich web applications. A variety of new input types exist to leverage phone and tablet interfaces, where the input type (search box, URL, phone number, etc.) determines in advance what input options are needed and what to display to the user. The ability to auto-display choices to a user based on a data list is shown, as are a variety of input widgets familiar to mobile users, such as sliders for numerical values and spin-wheels for choosing dates, all of which can now be called by assigning attributes to forms and applications. One of the nicer HTML5 options related to forms is client-side form validation; whereas before we needed to rely on secondary JavaScript, now it's just part of the form field declarations (cool!).

Chapter 9 looks at how HTML5 handles multimedia directly using the audio and video tags, and the options that allow the developer to display a variety of players, controls and options, as well as to utilize a variety of audio and video formats. Options like subtitles can be added, as well as captions displayed at key points (think of those little pop-ups in YouTube, etc. Yep, those). There are several formats, and of course, not all are compatible with all browsers, so the ability to pick and choose, or use a system's default, adds to the robustness of the options (and also adds to the complexity of providing video and audio natively via the browser).

Chapter 10 looks at the difference between a general web site and a mobile site, and the processes used to package a true "web app" that can be accessed and downloaded from a web marketplace like the Google Store. In addition, options like PhoneGap, which allows for a greater level of integration with a particular device, and AppCache, which lets a user store data on their device so they can use the app offline, get some coverage and examples.

Chapter 11 can be seen as an epilogue to the book as a whole, in that it is a look to the future and some areas that are still baking, but may well become available in the not too distant future: Web Components, which allow for blocks to be reused and enhanced while remaining in a space protected from standard CSS and JavaScript, and CSS changes like regions and exclusions, which will allow more customizable layout options. A lot of this is still in the works, but some of it is available now. Check the Further Reading sections to see what, and how far along.

The book ends with two appendices. Appendix A covers Browser support for each of the sections in the book, while Appendix B is a gathering of chapter by chapter Further reading links and sources. 

Bottom Line:

The so-called Modern Web is a mishmash of technologies, standards, practices and options that overlap and cover a lot of areas. There is a lot of detail crammed into this one book, and there's a fair amount of tinkering to be done to see what works and how. Each section has a variety of examples and ways to see just what the page/site/app is doing. For the web developer who already has a handle on these technologies, this will be a good reference-style book to examine and look for further details in the Further Reading (really, there's a lot of "Further Reading" that can be done!).

The beginning web programmer may feel a bit lost in some of this, but with time, and practice with each option, it will feel more comfortable. It's not meant to be a how-to book, but more of a survey course, with some specific examples spelled out here and there. I do think this book has a special niche that can benefit from it directly, and I'm lucky to be part of that group. Software testers, if you'd like a book that covers a wide array of "futuristic" web tech, the positives and negatives, and the potential pitfalls, in a way that would be of great value to a software tester, this is a wonderful addition to your library. It's certainly been a nice addition to mine :).

TECHNICAL TESTER FRIDAY: In Praise of Virtual Machines, and Tweaking with PHP

It's Friday, one week into this project, and as I mentioned in an earlier comment, I reserve the right to go back and change my mind about any of the options I've worked with and what I've put together. Today, I am doing exactly that.

For those who read last week's entry, I said it would be worth your time to install the needed software on your base machine, so that you could get a feel for each of the components and what it takes to do that. While I still think there's value in doing that, after having to uninstall, reinstall, unconfigure, reconfigure, modify, point somewhere else, change options again, and then notice that my hardware machines just don't quite line up the way I expect them to, I have decided to heed the call of so many who left their comments on my post from last week.

In this first block of stuff, I hereby wholeheartedly recommend that you set up a virtual machine to do this work. Set up several if you'd like, but your sanity will be preserved, and you'll have a few extra benefits:

- you can play what if with multiple machines if you choose
- if you decide to use a Linux virtual machine, your CPU, memory and disk footprint to run the VM is really small.
- applications like VirtualBox and VMware Server allow for saving states and for taking snapshots. It's sort of an on-the-cheap version control, and it can save you from shooting yourself in the foot. Much more so than using your base environment to do all of this.
- set up Dropbox or some other file share location, and you're golden, everything that matters gets placed in a spot where it can be accessed as you need it.

I had every intention of bringing VM's into the conversation at some point, but this past week made me decide now was the best time. If you want to do these exercises in both a hardware machine and a VM, you'll learn a lot. You'll learn a lot by sticking your tongue on an icy pole in the dead of winter, too. I'll leave it as an exercise to the reader to decide if some learning is best done vicariously. In any event, if you've taken the VM route with this, smart move, you won't regret it.

Next step is to set up a site with PHP. That would be great if I knew enough PHP to set up a site. Today, I can say I almost know enough to do that. PHP insertion is super easy. It's just a pair of tags, in this case "<?php" to open and "?>" to close, and some basic code that would look very familiar to anyone who has ever written a sample program in C or another language; anything between the tags is PHP doing the work.

<?php
// dot (".") is the concatenation operator (like + in JavaScript)
echo "Say" . " something" . " witty!";
$teabags = 0;
if ($teabags > 0) {
  echo "There are $teabags tea bags! I'll have a cup!";
} else {
  echo "No more tea! I guess I won't have a cup.";
}
?>
If you create the front end page this way, the server will happily evaluate the PHP. Note, you will need to save your pages with a .php extension, instead of .html, to do this. If there are other ways to do this, be patient with me, I haven't gotten that far yet ;).
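To make that concrete, here's the kind of minimal dynamic page I've been tinkering with. This is my own sketch, not from any particular tutorial; the greeting logic and file name are invented for illustration:

```php
<?php
// save as something like hello.php and request it through the web server
$hour = (int) date("G");   // current hour of the day, 0-23
$greeting = ($hour < 12) ? "Good morning" : "Good afternoon";
echo "<h1>$greeting, visitor!</h1>\n";
echo "<p>This page was generated on " . date("l, F j, Y") . ".</p>\n";
```

Because the PHP runs on the server before the page is sent, the browser only ever sees the finished HTML.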

There are a lot of different resources for PHP available, and one of the quickest to play with and try out is the Codecademy course. They cover the basics of the language, as well as how to put snippets of PHP into a web page. Another quick tutorial for building basic site elements can be seen at W3Schools, that perennial old-school favorite of web arcana, and the PHP site itself likewise has a fairly quick overview of how to make pages with PHP. Right now I'm playing around with a few elements to see what I can do to create some dynamic content and reference things like images and navigation elements.

Noah says to allow yourself a couple of weeks to get familiar with the language elements and practice making some simple pages. I'm going to reiterate that advice. It's really tempting to get greedy quick, and want to do too much or try to accomplish too many things at the same time. Noah's lesson 2 focuses on finessing HTML, CSS and JavaScript, so save those for lesson 2. Keep the focus in the short term on learning PHP basics, and try to incorporate a whole bunch of the options into a few web pages.

Also, save these snippets in a side file with some visible explanations, either in the HTML source or on the page itself. Why? Because this can be a running note tab to remind you of things that work and why. Make a tutorial page for PHP, using mostly PHP. Get a little meta. Put the page in a place you can readily access and review. Right now, I'm doing a bit of copy/paste to add elements and examples (oh for shame! I know, I'm breaking the Zed rule #1 here). Focus on understanding what you are doing first, *then* go back and see if there are ways to move these snippets into include files and refactor for efficiency. Yes, I'm saying incur a little technical debt right here. That's OK, it'll make the refactoring portion of this project a little more interesting ;).
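As a taste of that eventual refactoring, here's a sketch of the include-file idea. The file name and the site_title() helper are hypothetical, and I generate the include on the fly just so the example is self-contained:

```php
<?php
// in real life, shared.php would be a file you maintain next to your pages;
// here we write it out on the fly so the sketch runs on its own
file_put_contents('/tmp/shared.php',
    '<?php function site_title() { return "My PHP Sandbox"; }');

include '/tmp/shared.php';   // each page pulls in the shared helpers
echo "<h1>" . site_title() . "</h1>\n";
```

Once common markup and helpers live in one include file, a fix there is a fix on every page that uses it.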

My goal for this extended weekend and into next week is to get a mockup of a site, not just a page, that includes these PHP aspects, and start playing with them. Now of course, comes the tough part... how to make a site that will interest me enough to engage, but not be so detailed as to cause me to climb down too many rat holes. Stay tuned for next week's edition of TECHNICAL TESTER FRIDAY to see how well I did ;).

Thursday, February 13, 2014

What Force Would it take to Shatter an Ice Cube?

The question above was prompted by a Skype Coaching session I held a couple of days ago. A new tester contacted me and asked me how they might get started with software testing, and what should they do first.

Almost immediately the person who contacted me (I haven't asked their permission to share all the details, or their name, so I won't) asked me questions about automation tools, and what programming languages they should know and work on. I asked them to stop, and I shared my philosophy with them about testing, which should have as little to do with programming as possible. Don't get me wrong, programming is a perfectly wonderful skill, and I dabble in it at times, but I think we do a disservice to those who want to be testers when we make the primary requirement "must be a programmer with these skills and history". What I want to know is "how does a person think? What kind of avenues do they follow? Are they random, or do they hang things on a structure that others can examine, review, and comment/critique?"

It was at this point that I decided to ask about something I believe every tester should know, and that's "what do you know about the scientific method?" Yes, for those who have followed me for a while, you already know that that is one of the topics we are developing for SummerQAmp, but one of the things I wondered was "how could someone actually demonstrate they understand this?" To that end, the question popped into my head, and I figured I might as well run with it.

"What Force Would it take to Shatter an Ice Cube?"

It's a random question, and that's what I wanted, something random and mostly removed from software testing. It's often too difficult to step in and think about making a science experiment out of software, because it feels so intangible. Physical objects, though, are perfect for these thought experiments, so I figured we'd use an ice cube as a starting point for the conversation.

What Force Would it take to Shatter an Ice Cube?

How would someone answer that?
They could make a guess.
They could throw an ice cube at a wall and say "that much force"... and they'd be right.... sort of ;).

But if we really wanted to know, at what point, with actual data, could I determine where an ice cube shatters...

We'd probably set up an experiment, right?
What would we want our experiment to tell us?
Could we determine the point where it happens?
How would we do so?
What would we need to make the experiment happen?
How would we make measurements? Do we actually need measurements?
What will our feedback be? What will tell us how we are doing?
How are we gathering data?
How would we explain our results?
Can we "defend" our methodology?
What if new information came our way; could we account for that?
Would we need to repeat or redesign the experiment?

I explained that the process of thinking this through, or even setting up an experiment and actually doing it, would inform them more about their own understanding and curiosity of things than any primer on testing I could give them.

I remember last year discussing this for the first time with James Pulley, and how he said that the Scientific Method has to be Lesson Zero for any potential software tester. The more I consider this, the more I agree. Time will tell if the person who started this conversation follows up, but I think if they actually do this, and really think about this process in depth, as well as do some reading on the Scientific Method and why it matters (and that it's not helpful for everything), then we'll be quite a ways down the road to understanding some of the foundations of what it takes to even start a conversation on what makes a good tester.

Agree? Disagree? Better experiments to suggest? I'm all ears :).

Wednesday, February 12, 2014

Book Review: Perl One-Liners

Remember when I said I was going to be on a plane for a combined total of 12 hours, and I was going to use it to work through a bunch of books? This is a continuation of that airline readathon. NoStarch has been really generous and given me a bunch of books to review, so in case you are wondering why there are so many NoStarch titles in a row, well, now you know why :). 

There’s a certain cachet that comes with being able to hack up Linux, Darwin or UNIX boxes. Being able to write scripts is immensely helpful, but there’s always that knowing glance, that little nod, that holdover from the days of “Name That Tune”, where instead of saying “I can name that tune in one note”, the command line geek smiles and says "I can take care of that task with one line”.

Granted, those "one-liners" are often rather involved. Lots of pipes and tees and redirects, to be sure, but somehow, they can still be said to be "one line fixes" or "one line scripts". I currently work in an environment where Perl is still in active rotation. I used to write CGI programs in Perl once upon a time (and still maintain some of them to this day). I've always appreciated the ability to do things in one line, and in many ways, it's a neat way to learn some of the more oddball syntax options of both the shell and a given utility, and actually put them into use. All this is me building up to the fact that, when I saw the listing for Peteris Krumins' "Perl One-Liners: 130 Programs That Get Things Done", it just begged for me to say "Oh please, let me review this!"

The back cover makes the following claim:

Save time and sharpen your coding skills as you learn to conquer those pesky tasks in a few precisely placed keystrokes with Perl One-Liners.

So how does it stack up?

Chapter 1 is meant to orient the reader towards the idea of a one-line fix or utility. Fact is, a lot of what we do are one-offs, or tasks that we may need to do one time, but for dozens or even hundreds of files. Some knowledge of Perl is helpful, but many of the commands can just be typed in as is; examine the changes made, and work backwards. Windows users need to do a little tweaking to some of the commands, and Appendix B is there to help you do exactly that. Also, if you see examples that make you want to scratch your head (trust me, you will), there's always perldoc, which will explain those areas you struggle with.

Chapter 2 looks at spacing, or more to the point, giving you control over just how much of it you want or can see. Each example explains the steps and the actions each command will perform. Various command line options are covered; -e lets you pass the program right on the command line, and options like -n and -p wrap that code in an implicit while loop that runs on every line in a file. Lots of fascinating variations, each one doing something interesting, some easy to understand, and some bringing cryptic to a new level ("perl -00pe0" anyone? Yes, Peteris explains it, and yes, it is pretty cool).

Chapter 3 covers Numbering, and a variety of quick methods to play with and tweak the numbering of lines and words. The "$." special variable gets covered (thinking it might have something to do with the line number of whatever you are interacting with? You'd be right!), as well as a game called "Perl golfing", which I guess is the Perl equivalent of my aforementioned "name that tune".

Chapter 4 deals with Calculations. Want to do on-the-fly counting? Want to shuffle elements? Figure out if numbers are prime or non-prime? Determine a date based on an input? This chapter's got you covered.

Chapter 5 focuses on Arrays and Strings. Want to create your own password generator? Figure out ranges? Determine what a decimal value is in hex, and vice versa? Oooh, I know, how about creating strings of various characteristics to use for text input (come on testers, you were all thinking it ;) )? Maybe generating your own personal offset code to semi-encrypt your messages sounds like fun. There's plenty here to help you do that. Of course, if that last one seems interesting, the next chapter offers some better options ;).

Chapter 6 deals with Text Conversion and Substitution. Apply base64 encoding and decoding (OK, much more effective, but maybe not quite as fun as making your own raw version), creating HTML or deconstructing HTML, creating or breaking up URLs, and a bunch of operators that allow for all sorts of interesting string manipulations.

Chapter 7 covers Selectively Printing and Deleting Lines. Typically, a one liner that can selectively print can be tweaked to selectively remove, and vice versa. Look for and print lines that repeat, or those that match a particular pattern (and of course, apply similar rules to remove lines that meet your criteria as well).

Chapter 8 brings us to Useful Regular Expressions, and Perl has a monster of a regular expression engine. Rather than give an exhaustive rundown, Peteris focuses on regex patterns that we might use, well, regularly: finding and matching IP addresses and subnet masks, parsing HTTP headers, validating email addresses, extracting and changing values, and many others. This chapter alone is worth the price of purchase.

The book ends with three appendices. Appendix A gives a listing of Perl's special variables, and how to use them in their context (with more one-liner examples). Appendix B is all about Perl one-liners on Windows, including setting up Perl, getting Bash on Windows, running the one-liners from the command prompt or PowerShell, and the oddities that make Windows, well, Windows, and how to leverage the book to work in that ever so fascinating environment. Appendix C is basically a printout of all the Perl one-liners in the book. It can be downloaded and examined as a quick "how do I do that again?" reference in one file.

Bottom Line:

This is a tinkerer's dream. What's more, it's a book that you can grab at any old time and play with for the fun of it, and yes, this book is FUN. You may be an old hand at Perl, you may be a novice, or maybe you've never touched a line of Perl code in your life. If you want to accomplish some irksome tasks, it's a good bet there's something in here that will help you do what you need to. If the exact match isn't to be found, it takes very few jumps to get to something you can use and work with, novice and guru alike. You might think it would be ridiculous to say to yourself "hey, I have a few minutes to spare, I think I'll try out some Perl One-Liners for fun". Maybe I'm weird, but that's exactly what I find myself doing. It's the coder's equivalent of a Facebook game, but be warned, a few minutes can become a couple of hours if you're not careful. Yes, that's hyperbole, and at the same time, it isn't. Perl One-Liners is seriously fun, and a wonderful "tech book" for the short attention span readers among us. You may find yourself turning to it again and again. My advice, just roll with it :).

Fix You Estimating Bad Habits: An Evening with Ted M Young and #BAST

Tonight in San Francisco, BAST (Bay Area Software Testers) will be hosting our first Meetup for 2014. For those who will be there, I'll be excited to see you. For those who aren't able to make it, please follow this post in the coming hours, as I will be live blogging what I hear and do.

BAST (Bay Area Software Testers) has made the commitment to try to bring in topics that go beyond or outside of the common or typical tester MeetUp topics. To that end, we are hosting Ted Young (Twitter: @jitterted) who will be talking about  how to "Fix Your Estimating Bad Habits".

For those planning on attending, here's the details:

Date: Wednesday, February 12, 2014
Time: 6:15 p.m. to 8:30 p.m.
Address: 50 Fremont St, 28th Floor, San Francisco, CA (map)

From our Meetup site description:

Ted is a self-styled Renaissance Coder who has been coding since 1979 and sold his first commercial program at the age of 14.  Ted is currently the chief practitioner and evangelist of lean and systems thinking for GuideWire software.  He works nicely with our mission statement to present you with outstanding thinkers and practitioners from all across the spectrum of software development and business.

I hope you'll be able to join us at the facilities of our gracious host, SalesForce. Food (pizza) and beer will be provided! Also, we will be giving away two books by O'Reilly Media, who has graciously started supporting us and is giving us books that we can give away. Tonight's titles are "Vagrant: Up and Running" and "LEAN UX". Must be present to win :)!

For the rest, you're just going to have to wait until I get there.

Ted started off the talk tonight describing some of the environments where he has worked and some of the challenges he's faced. He's asked us to use the slides as a token for conversation, and not to treat this as a "presentation" per se. This is meant to be a conversation, not Ted talking, which means this is likely to be even more fun than normal.

Think about the things you want to do; what are some options you'd like to try? Ted mentioned that he'd love to do TDD in his organization, but it's not practical with their environment as it currently exists. There are trade-offs in all things, and one of the trade-offs he's had to deal with is the nature of their tests. The real world is full of constraints, and we often have to deal with those constraints.

Ted threw out a provocative question... what is an estimate? It took awhile for people to reply to this. A best guess based on information that we have (aka "bull---t"). SWAG, or "silly wild ass guess". A combination of everything you need to do, paired with everything you've already done and know, and figuring out how to harmonize the two. Any wonder why we (collectively) are so bad at this ;)?

How about bugs? Have you ever estimated how long a bug will take to fix? Of course, it depends on the bug. A mis-labeled element might take five minutes. A (seemingly) random performance drop under load might take days to figure out, maybe more, not even counting the time it might take to actually fix it. Some bugs seem obvious after the fact, but getting to "obvious" might put our programmer and tester through days of hair pulling and mega frustration.

Ted refers to bugs, infrastructure, and other such items as "overhead", and while it's important to know how long it takes to take care of these things, trying to assign points to them is counterproductive. Ted isn't saying "don't track the time it takes to do tasks". Those items are important, but they take time away from real feature coding (and feature testing, too).

Another bad habit is to "ignore all previous or other projects". Why do we save all of our spreadsheets, story point data, and other details if we have no intention of learning from them? The reason is that history proves that time and energy just seek a status quo. We record everything because we once intended to learn from what we found, but over time, we just end up recording everything because we've always recorded everything. Mining the data to make some projections of future work may not be entirely relevant, but it's not worthless either. Has anyone ever really delivered a project early with all scheduled features? Typically, projects come in over time, over budget and with jettisoned features. Examining previous projects to see trends allows us to counteract biases, if we are willing to pay attention. This approach is called Reference Class Forecasting, and while it may not be very glamorous, it can give amazing guidance. Sadly, companies that are good at doing Reference Class Forecasting are rare.

Another challenge we face is that we try to sample data without a clear reference of what we are sampling or even why we are sampling it. Think of audio. The lower the sample rate, the lower the quality of the audio. The higher the sample rate, the better the fidelity of the audio you capture. Sampling of data and history (think velocity of projects) could be very high fidelity or very low fidelity. Most of the time, we just don't know what we are really measuring. We are also terrible at statistics, on the whole. We tend to focus on perceived catastrophic instances, where the likelihood of that event happening was so incredibly low that winning the PowerBall lottery was more likely. Meanwhile, real and genuine dangers, much more prevalent, were not given the attention they deserved, especially considering their likelihood of occurring was significantly higher.

Another mistake we make is to believe that velocity (or defect rate, scope creep, expertise, skill, etc.) is linear. It's not. There comes a time when technical debt or other issues will slow down the process, or speed up one area to the imbalance and detriment of another.

Ted talked a bit about the flaw of averages, and the danger of always taking the mean and adding it up. Granted, the mean may be correct 68% of the time, but that also means that 32% of the time we are wrong, or off the mark, or potentially *way* off the mark. That 32% chance is one in three. That's a lot of potential for getting it wrong, and all it takes is one unknown variable to totally throw us off. An unknown could, potentially, help us move unexpectedly faster, but generally speaking, unknowns tend to work against our expectations. The mean may be a "3", but one out of three times, it will be a six, or an eight.

Another big danger is "attribute substitution". We don't know the answer to A, but we know B very well, and we kind of think B is a lot like A, so we'll provide B as support for why A will take as long as it will. It's a dangerous substitution, and people do it all the time, most of the time not realizing it. It's the analogy stretched too far. Analogies can be helpful, but analogies are never perfect fits. There's a real danger in relying on them too much. Measuring everything the same way is also problematic. Epics, themes, stories, tasks... measure them all the same way, and there be dragons!

Another danger we face is that we spend a lot of time on things that have little value. Why? Because it's much easier. Ted mentioned the idea of the cost of delay, where if I don't have a capability at a certain point in time, there is a cost, and I might be able to calculate that cost. Very often the hard stuff gets pushed back, and it's the hard stuff that's most in danger of becoming the victim of the cost of delay.

Ted suggests that we tend to underestimate our own completion time, but we are much better at estimating other people's efforts. Is it wishful thinking on our part, or are we just better observers of others and their capabilities over time? Perhaps taking our overestimation of another and adding it to our underestimation of ourselves might bring us to the sweet spot of how long something will really take. It's an interesting idea.

How many of us tend to only estimate our actual "touch time", meaning our in-the-zone, fully focused on what we are doing time? We tend to think way too much of how well we allocate our time and attention. We forget all of the things that we may need to do, or just plain want to do. Do we need to talk with other stakeholders? Do we need to coordinate with other members of the team? Those external dependencies can have a major impact on how much time and attention we can actually apply to an issue.

Another danger we face is that we tend to associate points and sizes to time in isolation. We completely neglect the actual complexity of the effort. We don't want to think of just the idea that a point equals a perfect engineering day; the whole idea of points was that we would abstract away the time factor, and that a difficult task or a simple task would still have a point value, but the points would be variable. Unfortunately, we've pegged points to time, so changing that attitude may be a lost cause. Additionally, using things like Fibonacci numbers or planning poker, aspects that point to larger numbers, tends to increase the odds of under-estimation. The big numbers scare people. It's more comfortable to go with the values in the middle, an effect referred to as Anchoring Bias. We tend to anchor to what other people say, because they are close together. We don't want to be the outlier, even if we have a deep suspicion our pessimism may be well warranted. Our attempts to remove anchoring bias instead put social anxiety into the mix. In short, we go with what feels safe. Remember, safe has a one in three chance of being wrong.

Another real danger is premature estimation. Have you explored the options before you commit? This can happen when someone tells us what to implement, rather than describing the problem and looking for the WHY of the problem. Exploring the WHY lets us see other potential avenues, whereas committing to a WHAT may mean committing to a course of action that is overkill for what is really needed to solve the actual problem. Sometimes we just have to experiment. We may not be able to come to an obvious course of action. There may be two competing ideas and both may look great on the surface. Sometimes we may just have to set up a few experiments to see what might be the best approach.

This was a deep talk, and there were some interesting aspects I hadn't considered. Some of these items are really familiar, and some are subconscious; there, but under the radar. Lots of food for thought to say the least :).

Our thanks to Salesforce for providing the venue for tonight, our thanks to Ted Young for speaking, and thanks to O'Reilly Media for providing free books for our Meetup. We would of course love to have more to give away, if you feel so inclined ;).

By the way, just want to point out, as the picture indicates, the spelling of "Fix You Estimate Bad Habits" was originally a typo, but became part of the title because it underlined the whole point of the topic :).

Book Review: Ruby Under a Microscope

Let's start with a disclaimer. I'm not a Computer Science major, nor did I complete a Computer Science course of study in school. I'm a software tester, one who frequently finds themselves using programming languages of various stripes for various purposes. Ruby is one of the most popular languages in current use, and for many, it's a language that allows them to learn some basic terms and some programming constructs, and then lets them just use it. It's clean, it's elegant, it's almost simple. It's a language that invites the user to just go with it.

For some, though, there's that sense of curiosity... what is my Ruby program really doing? How can I see what the system is actually doing with my code? What's going on underneath the hood? If such explorations interest you, then "Ruby Under a Microscope" by Pat Shaughnessy tackles that subject handily.

A word of warning going in. This is not a general language book. You will not learn much about programming in Ruby here. You should have a decent understanding of what Ruby syntax looks like and how it works. Having said that, you don't need to have years of experience with Ruby to appreciate this book for what it does. It takes some key areas of the language, and through examples, some small programs, and a variety of tools, lets you see and understand what Ruby actually looks like up close and personal.

Chapter 1 focuses on how Ruby understands the text that you type into your Ruby program. Ruby converts your source code first into tokens, and then converts that token stream into an “abstract syntax tree”. Through tools like “Ripper”, you can see this process and watch it in something resembling natural language (well, kind of. It’s still debug output, but it’s a lot less intimidating than one might think).
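Ruby's standard library ships with Ripper, so you can watch both steps yourself. Here's a minimal sketch (the snippet being tokenized and parsed is just an arbitrary example):

```ruby
require 'ripper'
require 'pp'

code = "sum = 1 + 2"

# Step 1: tokenize. Each entry looks like [[line, column], token_type, token, ...]
pp Ripper.lex(code)

# Step 2: parse. Ripper.sexp returns a nested-array view of the abstract syntax tree
pp Ripper.sexp(code)
```

Running this shows the raw tokens first, then the tree Ruby builds from them, without any need to dig into the C source.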

Chapter 2 covers how Ruby compiles code. Wait, isn’t Ruby a “scripting language”, no compiler required? With 1.8 and earlier, yes, but with 1.9 and up, Ruby is compiled just like many other languages we are familiar with. The difference? Ruby does it automatically. You never need to invoke the compiler. Ruby also has its own “virtual machine” (YARV, or “Yet Another Ruby Virtual Machine”) that it compiles its bytecode for. Ultimately, the bytecode for YARV is what we witness running.
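You can ask MRI to show the YARV bytecode it produces for a snippet; a quick sketch using `RubyVM::InstructionSequence` (available in MRI 1.9 and later):

```ruby
# Compile a one-line snippet to YARV bytecode and print the disassembly
iseq = RubyVM::InstructionSequence.compile("1 + 2")
puts iseq.disasm
```

On the MRI versions I've tried, the output lists instructions like putobject and opt_plus, ending with leave, which is exactly the kind of listing the book walks through.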

Chapter 3 goes into greater detail about how YARV runs our code. By comparing the steps necessary to run a simple program, we can compare the time it takes to run a program in Ruby 1.8 (which doesn’t have a compile step, it just runs) and Ruby 1.9 and 2.0, which do. For simple and brief interactions, it actually looks like Ruby 1.8 performs better, but for longer runs with more iterations, 1.9 and 2.0 have a huge advantage over 1.8 by virtue of their compile step.

Chapter 4 focuses more attention on the virtual machine and how control structures and methods are handled within YARV. "if" statements, "for" loops, and calls to various methods demonstrate how Ruby breaks down the instructions, as well as how it uses internal "jump" instructions to move from one piece of code to another. Ruby categorizes methods into 11 types, and labels its built-in methods as CFUNC methods, meaning they are implemented in C. Ruby also uses a hash to keep track of the number of arguments, their labels, and what their default values should be.
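Disassembling a conditional makes those jumps visible; a small sketch (the snippet itself is invented for illustration):

```ruby
code = <<~RUBY
  if x > 0
    "positive"
  else
    "non-positive"
  end
RUBY

# The disassembly includes a branch instruction that skips one arm
# of the conditional, plus a jump past the other
puts RubyVM::InstructionSequence.compile(code).disasm
```

In the listing you can pick out the comparison, the branch, and the labels it jumps to, mirroring the diagrams in the chapter.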

Chapter 5 looks at objects and classes, specifically Ruby’s internal objects and classes. Each Ruby object is, ultimately, a class pointer paired with an array of instance variables, and everything in Ruby is an object. Several generic objects are shown, along with their C structures (RString, RArray, RRegexp, etc.), demonstrating how they, likewise, are simple combinations of a class pointer and instance variables. Classes are a little more involved. Each Ruby class can be defined as a Ruby object (with its class pointer and instance variables) plus method definitions, attribute names, a constants table, and a “superclass” pointer.
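That "class pointer plus instance variables" pairing is easy to see from plain Ruby; a small sketch (the `Mineral` class is made up for illustration):

```ruby
class Mineral
  def initialize(name, hardness)
    @name = name
    @hardness = hardness
  end
end

quartz = Mineral.new("quartz", 7)
p quartz.class               # the object's class pointer, seen from Ruby
p quartz.instance_variables  # its array of instance variables
p Mineral.superclass         # the class's "superclass" pointer
```

Internally these map onto the C structures the chapter dissects, but the shape of the data is already visible at this level.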

Chapter 6 brings us deeper into methods and constants, specifically how these aspects are found and represented. Ruby lets a programmer look at programs with two contrasting paradigms: our code can be organized through classes and superclasses, or it can be organized through "lexical scope". Which approach makes the most sense? It depends on what you want your program to accomplish.
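A tiny sketch of the contrast: the constant below is found through lexical scope (the enclosing module), not through the superclass chain. The names are invented for illustration:

```ruby
module Aquarium
  TANK_SIZE = 65
  class Cichlid
    # TANK_SIZE is resolved via lexical scope: Ruby walks the enclosing
    # modules (Module.nesting), not Cichlid's ancestor classes
    def tank_size
      TANK_SIZE
    end
  end
end

p Aquarium::Cichlid.new.tank_size  # found lexically, in the enclosing module
p Aquarium::Cichlid.ancestors      # the class/superclass organization
```

Both lookups coexist in every program; the chapter explains the order in which Ruby tries them.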

Chapter 7 gets into one of the key features of Ruby’s internals, the hash table. These are interesting data structures that allow a program to return values quickly, and to automatically increase in size as more elements are added. The chapter takes a deep dive into Ruby’s hash function and how it allows elements to be accessed quickly.
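From the Ruby side the machinery is invisible, but you can poke at the pieces: every key answers `#hash`, and that integer decides which internal bucket a value lands in. A quick sketch:

```ruby
h = {}
h["apple"] = 1
h["banana"] = 2

# The same key always produces the same hash value within a process,
# which is what makes near-constant-time lookup possible
p "apple".hash == "apple".hash

# Lookup uses the key's hash value to find the right bucket
p h["apple"]

# Adding many entries triggers the automatic growth the chapter describes
1000.times { |i| h["key#{i}"] = i }
p h.size
```

The chapter shows what happens in C when those buckets fill up and the table rehashes itself.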

Chapter 8 covers blocks, and how the blocks concept in Ruby borrows from the “closure” idea first proposed in the Lisp language several decades back. A block can be defined as “a combination of a function and an environment to use when calling that function”. Using “lambda”, a block can become a data value that can be passed, saved, and reused.
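A minimal closure sketch: the lambda keeps a reference to `count` from the environment where it was created, even after the enclosing method returns. The names are invented for illustration:

```ruby
def make_counter
  count = 0
  # The lambda is a function plus the environment it closed over;
  # count lives on after make_counter returns
  lambda { count += 1 }
end

tick = make_counter
p tick.call  # => 1
p tick.call  # => 2
```

That surviving environment is exactly the structure the chapter traces through Ruby's internals.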

 Chapter 9 discusses Metaprogramming, a means to program in a way that code can inspect and change itself, dynamically. In other words, by referencing itself, your program can change itself! I’ll admit, this is one of the aspects of Ruby (or any language) that I have trouble getting my head around, and while I won’t claim to have mastery of these ideas after this chapter, I feel I have a little better feel for what’s happening.
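A small taste of the idea: here a class writes its own reader and writer methods at runtime with `define_method`, rather than defining them with plain `def`. The `Fish` class and attribute names are invented for illustration:

```ruby
class Fish
  # The class inspects a list of names and generates accessor
  # methods for each one on the fly
  %w[name species].each do |attr|
    define_method(attr) { instance_variable_get("@#{attr}") }
    define_method("#{attr}=") { |value| instance_variable_set("@#{attr}", value) }
  end
end

f = Fish.new
f.name = "Dory"
p f.name
p Fish.instance_methods(false).sort  # methods the class gave itself
```

It's a simple example, but it captures the "program modifying itself" flavor the chapter digs into.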

Chapter 10 takes us into the Java realm and shows us Ruby implemented in Java, as opposed to how we’ve been interacting with it thus far in C. The flow is similar, but each Ruby script gets compiled into Java bytecode, and then is physically run by the Java Virtual Machine. We see how “Jay” parses the lines of code (much the way Bison does for MRI). By monitoring Java’s Just-In-Time compiler, we can see which classes and structures are called whenever we create a script and run it. We can also see where, by focusing on various “hot spots” in our program and compiling them into Java bytecode, we can save time in key areas compared to C-implemented MRI.

Chapter 11 introduces Rubinius, a version of Ruby implemented with Ruby. Well, it’s actually a virtual machine using C++ to run Ruby code. What makes it different is that, rather than relying on C or Java structures for the built-in classes, Rubinius does it with Ruby code. What does this mean? We can see how Ruby works internally without having to know C or Java. It’s all done in Ruby, and we can see how by reading the source code.

Chapter 12 explores Ruby’s garbage collection, and how it differs, and is similar, in MRI, JRuby, and Rubinius. Garbage collection helps us with three processes: allocating memory for use by new objects, identifying which objects a program is no longer using, and reclaiming memory from unused objects. Various programs and examples demonstrate which objects are mapped where, and how to see when they are deallocated and their memory freed. The algorithms used by the various virtual machines are explored, but this is just a cursory overview of all the options and how they are implemented. Still, it’s an interesting view into a process that many of us take for granted, because Ruby and many other languages basically let us take it for granted.
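In MRI you can watch the collector at work through `GC.stat` and `GC.start`; a quick sketch:

```ruby
# GC.stat exposes counters from MRI's garbage collector
runs_before = GC.stat(:count)

# Allocate a pile of short-lived objects that become garbage immediately
100_000.times { "a temporary string" }

GC.start  # explicitly request a collection
p GC.stat(:count) > runs_before  # at least one more GC run has happened
```

It won't tell you which algorithm ran, but it does make the invisible process the chapter describes a little more concrete.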

Bottom Line:

"Ruby Under a Microscope" does something fairly ambitious. It attempts to write a system internals book in a language that non-computer-scientists can readily understand. While there are numerous code snippets and examples to try and examine, the ability to look at the various Ruby internals and systems and see how they fit together can be accomplished by someone with general skills and basic familiarity with programming at the script level (which for many of us is as far as we typically get). An old saying says you can’t tell where you are going if you don’t know where you’ve been. Similarly, we can’t expect to get the most out of languages like Ruby without having a clearer idea of what’s happening under the hood. It’s entirely possible to work with Ruby and never learn some of this stuff, but having a guide like "Ruby Under a Microscope” opens up a variety of avenues, and does so in a way that will make the journey interesting and, dare I say it, even a little fun.

Tuesday, February 11, 2014

The Thrill of the Chase

This past weekend, two of my kids and I decided to venture down to Harry's Hofbrau in San Jose. Why, you might ask? Well, other than the fact that it's a pretty decent restaurant with good food, it's also where the Pacific Coast Cichlid Association (PCCA) has its monthly meetings. Yes, I know some of you are scratching your heads, seeing I'm talking about "fish" again... what does this have to do with software testing? Be patient, I'm getting to that ;).

The PCCA has been around for years, and I've heard about them peripherally for about as long as I've been keeping fish. However, over the years, I entered a period of complacency. I had a thriving breeding colony, and had to extend my fish out into several tanks in the house to give them some space. I was doing fine, I knew what I was doing, and I hadn't had to buy fish in years. What was a group like PCCA going to do for me?

Well, as many of you know, I decided I wanted to give away the fish that had been the core of my breeding stock for several years, and that made it possible for me to add new fish for the first time in, well, close to a decade. I also elaborated on how doing that introduced a strain of ich into my tank that killed everything. From thriving to ruined in just a few weeks.

I decided to rebuild, only this time, I decided it was an opportunity to do things very differently. Since none of the fish that were in my systems survived, it meant starting totally anew, and that meant I could consider a totally different biotope if I wanted to. I set my son up with a twenty gallon tank exclusively with Lake Malawi cichlids, as well as a Royal Pleco because it looks cool. I built a twenty gallon quarantine tank in my half bathroom upstairs (seemed a logical place to put it ;) ), and set up my show tank (65 gallons) once again in the hope that I could reinvigorate the population. As I was doing this, I realized, "hey, the PCCA is having its monthly meeting at Harry's Hofbrau on Saturday... maybe me and the kids should head down and check it out?"

One meeting later (a meeting that was surprisingly like a traditional Meetup, only with a fish auction at the end, from which we took home three new fish), we were card-carrying members of the PCCA, and are already talking with other members about next steps.

I promised this had something to do with testing, so I thank you for your patience. Very often, test practitioners go through their careers without even realizing that there's a community of other testers out there. They fall into the role, they read a little bit, they do some work, and it meets a need. Thus, they feel like they have things covered. They don't really need to be disturbed, or so they think. Typically, this state of affairs lasts until something catastrophic happens, or barring catastrophe, something that really calls their supposed expertise and skill into question. Some react defensively, some reflect, and some decide that maybe they need some help.

It's at this point where community becomes critical, and encouraging others to get engaged, to participate in the "thrill of the chase" to find what they need, comes into play. We often want to encourage other testers to join us. Heck, we scream for it! The problem is, our screaming won't register with those who have no desire to hunt. It won't matter to those who don't feel the thrill of the chase, and the fact is, those of us who heed that call do so because we've decided we need something more. We tend to assume that the really good testers are the ones engaged in the broader community, but is that because we are good at what we do, or is it because we are the type that will engage, and that engagement helps us become better than we would be if we didn't engage with the broader community?

This past weekend reminded me that most of us fall on a continuum of expertise and understanding. Many of us feel that we know enough to go it alone. Many of us find value in community engagement. We look to see if we can engage others and bring them along with us, but as my fish-keeping history has reminded me, we seek when we are ready to seek, and we get help or reach out to a community when we reach a point where that is what we want to do. Perhaps the better approach is not to broadcast what we are and what we can do, but to let others discover us, and if they are inclined enough to chase after us, then let them :).

Friday, February 7, 2014


My challenge to myself, with the full blessing of Noah Sussman, was to take the ideas from his proposed book, specifically his table of contents and their suggestions, and put them into practice. 

As I have been looking over this process, I’ve realized that this is not a project that really makes sense to post as a single "chapter" kind of thing. For starters, some of the steps take days, or even weeks, to put into place. Some of the sections deal with areas I currently have no exposure to, or only scattered exposure. Mostly, this is going to require a continued connection, and daily practice, to make happen.

It’s with that in mind that I have decided to approach this as my own case study. My goal is to post something every Friday, regardless of how much or how little content there may be. Some Fridays may have more than one post, and may cover other topics, but I am committing to a weekly post, in some form, that starts with “TECHNICAL TESTER FRIDAY: [Fill in the Blank]”.

This is the intro from Noah’s post about this project on his end, and I’m going to take it to heart:

This is the first draft of the table of contents of a book that I have been writing. It’s worth noting that this entire program can be worked through without spending a penny on proprietary software (with the optional exception of Charles). It also does not require a powerful computer — you could do all this on a five year old MacBook.

Where possible, I'm going to emphasize materials and sources that are freely available. I may reference a book here and there if it seems like it would be helpful (and there are some books listed in this journey, so expect that I will be reviewing the ones mentioned).

For grins, I'm doing this in two environments (and potentially more). The first is a MacBook Pro, and the second is a Toshiba Satellite PC laptop. Why both? Because I think it makes sense to see how things will work in two significantly different environments, and what adaptations I might need to make. Also, while the MacBook has almost all of the software in place by default, PC users will have to install everything individually. Thus, today's entry is a quick cheat sheet on getting set up for this "first chapter".

Use PHP to put up your own Web site. Spend a couple of weeks making it a reasonably decent looking site. It doesn’t matter what the site is or whether it has any transactional functionality. Just some pages that display photos and content is fine.

All right, let's get the first caveat out of the way... I have never set up a site to use PHP. Ever. GASP!!! Somehow, I've either managed to work with three-tier systems (old school stuff that used CGI::Perl), DotNetNuke frameworks that were all based around .NET, Rails sites, or some other established web framework. For some reason, PHP just never entered into the mix of my regular everyday interactions. It seems this project is going to change that.

So what does it take to set up a site to use PHP? For my purposes, it will require a web server, a database, and some pages with HTML, CSS, and JavaScript for good measure. That means we need a web server (Apache, since I have some familiarity with it already), MySQL (because, why not?), and PHP (because that's what Noah's suggesting we use, of course ;) ). With the exception of Linux, this is the classic LAMP stack that's been with us for nearly two decades now (granted, the LAMP I knew was based around "Perl", but "PHP" will work for us here ;) ).

First up, Installation:

You'll want to install Apache HTTP Server, MySQL, and PHP on your system. To get tips on doing that, go to:

Apache HTTP:

This is pretty straightforward. Download the latest MSI build for the fastest setup. Give it some basic parameters so that it can be recognized on the network. Regardless of your configuration, if you take the default options, the server will be accessible at http://localhost:80/. If you see "It works!", well, it works ;).


Note that this will take you to Oracle and their Cloud Delivery section. You will need a user name and a password. Download the MSI for the quickest installation (of course, if you want to download the nightly builds to be more wild and crazy, that's a great exercise in and of itself ;) ). For my purposes, I am just installing the generic MySQL server.

Once it's installed, you can open the command line option from the Start Menu (Start: All Programs: MySQL: MySQL 5.6 Server: MySQL 5.6 Server Command Line Client).

If you see this, you're good to go:

This is the one part of this puzzle that doesn't come pre-installed on your Mac (versions of httpd and php are already installed and ready to use). 

There are a variety of ways to get MySQL for Mac, but if you are using Mavericks as your OS (like me), here's a bone simple way to install MySQL on your Mac (at your Terminal prompt, type):

bash <(curl -Ls

ETA: while this is indeed extremely simple, it's also a potential security nightmare. As mentioned in the comments below, and to which I agree, never just run a script blindly. The shortened link expands to the following URL, and there you can review the entire script:

You will need to have Administrator access to complete the installation. Default options will be fine.


For Windows, this is fairly simple. Just download the files, unzip them, put them in a folder (C:\PHP is as good a place as any), and then add C:\PHP to your %PATH%. With this in place, you can write scripts in PHP and the system will respond.

Woo Hoo!!!

Again, with the Mac, most of this is already in place. Open your Terminal window and take note of where everything is and what versions you have. For my purposes, I'm using these versions:

So there we have it. Now that I have an environment I can poke at locally, I can start playing with things under total local control. Additionally, I also have access to an external "commercial" site that will allow me to do many of the same things. More about that in future posts.

Yeah, I know, not very exciting, but it's a start, and that's what I've needed to do for a while now, just get this started. Now I'm honor bound to give you all something more meaty by next week :).