Monday, March 30, 2015

Communications and Breakfast: An Amusing Cultural Moment

Last week, my family had the pleasure of hosting three girls from Narita, Japan, as part of our sister-city cultural exchange program. We first took part in this exchange in 2013, when our older daughter was selected to be part of our city's delegation to Japan. Delegates' families are encouraged to host the delegates coming from Japan, and we did that two years ago. We greatly enjoyed the experience, so when it came time for our youngest daughter to see if she could participate, we of course said yes, and thus we welcomed three young women, a long way from their homes, into our home.

These girls did not know each other well prior to this trip. Unlike our city, which has one intermediate school within its limits, Narita has several schools represented, and the delegates chosen typically are not given hosting assignments with students from their own school. Therefore, it was not just a learning experience for the girls to relate to us, but to each other as well.

On Friday, as I was getting ready to take my eldest daughter to her school, one of the girls came downstairs with some prepackaged single-serving rice packets. She motioned to me, in what English she could muster, that she wanted the rice cooked. I figured "OK, sure, I'd be happy to do that for you", and did so. After it was cooked, I opened the small container, put the rice in a bowl, and then asked her if she wanted a fork or chopsticks. She looked at me quizzically for a moment, and then she said "chopsticks". I handed her the chopsticks, and figured "OK, the girls want to have rice as part of their breakfast. No problem." I then lined up the other packets, explained to Christina how to cook them in the microwave (40 seconds seems fine at our microwave's power level), and then went to take my daughter to school. The rest of this story comes courtesy of texts and after-the-fact revelation ;).

While I was out and Christina was helping get the rest of the packets of rice prepared, she pulled out a few bowls, put the rice in them along with a few more chopsticks, and placed them on the table for the girls. The young lady who had made the original request then said "no, this is for you". Christina smiled, said thank you, and started to eat the rice from one of the bowls. What she noticed after a few bites was that the girl was staring at her, frozen in place and looking very concerned. At this point, Christina stopped, put down the bowl, and asked if everything was OK. Our Japanese exchange student tried to grasp for the words to explain what she wanted to say, and as this was happening, another of the exchange students, one much more fluent in English, stepped in.

"Oh, this rice is for a breakfast dish we planned to make for you. Each of the packages has enough rice to make one Onigiri (which is to say, rice ball, a popular food item in Japan). At this, Christina realized what had happened, and texted me what she did. She felt mortified, but I assured her it was OK, and I'd happily split mine with her to make up for it. With that, we were able to work out the details of what they wanted and needed from us so that they could make the Onigiri for us (which they did, and which was delicious, I might add!).

I smiled a little at this, because I have been in this situation a few times in my career, though not trying to communicate between English and Japanese. Instead, I've had moments like this where I've had to explain software testing concepts to programmers or others in the organization, and they have tried to explain their processes to me. It's very likely I've had more than a few moments of my own where I stood there, paralyzed and watching things happen, wanting to say "no, stop, don't do that, I need to explain more," but feeling like the world was whizzing past me. As my wife explained the situation, I couldn't help but feel for both of them. Fortunately, in this case, all it meant was one fewer rice ball. In programming and testing, these miscommunications or misunderstandings are often where things go ridiculously sideways, and usually not in an amusing way. The Larsen Onigiri Incident, I'm sure, will become a story of humor on both sides of the Pacific for the participants. It's a good reminder to make sure, up front, that I understand what others in my organization are thinking before we start doing.

Wednesday, March 25, 2015

Register NOW for #CAST2015 in Grand Rapids, Michigan



After some final tweaks, messages sent out to the speakers and presenters, keynotes, and other behind-the-scenes machinations... we are pleased to announce that registration for AST's tenth annual conference, CAST 2015, to be held August 3-5, 2015 in Grand Rapids, Michigan, is now open!

The pre-conference Tutorials have been published, and these tutorials are being offered by actual practitioners, thought leaders, and experts in testing. All four of them are worth attending, but alas, we have room for only twenty-five attendees in each, so if you want to get in, well, now is the time :)!

The tutorials we are offering this year are:

"Speaking Truth to Power" – Fiona Charles

"Testing Fundamentals for Experienced Testers" – Robert Sabourin

"Mobile App Coverage Using Mind Maps" – Dhanasekar Subramaniam

"'Follow-your-nose' Testing – Questioning Rules and Overturning Convention" – Christin Wiedemann

Of course, you can also register for the two-day conference by itself, and yes, we wholeheartedly encourage you to register soon.

If you register by June 5th, you can save up to $400 with our "Early Bird" discount. We limit attendance to 250 attendees, and CAST sold out last year, so register soon to secure your seat.

Additionally, we will be offering webCAST again this year. Really want to attend, but you just can't make it those days? We will be live streaming keynotes, full sessions, and "CAST Live" again this year!
Come join us this summer for our tenth annual conference in downtown Grand Rapids at the beautiful Grand Plaza Hotel August 3-5, or join us online for webCAST. Together, let's start “Moving Testing Forward.”

Monday, March 23, 2015

More Automation Does Not Equal Repaired Testing

Every once in a while, I get an alert from a friend or a colleague about an article or blog post. These posts give me a chance to discuss a broader point that deserves to be made. One of those happened to arrive in my inbox a few days ago. The alert was to this blog post titled Testing is Broken by Philip Howard.

First things first; yes, I disagree with this post, but not for the reasons people might believe.

Testing is indeed "broken" in many places, but I don’t automatically go to the final analysis of this article, which is “it needs far more automation”. What it needs, in my opinion, is better quality automation, the kind that addresses the tedious and the mundane, so that testers can apply their brains to much more interesting challenges. The post talks about how there’s a conflict of interest. Recruiters and vendors are more interested in selling bodies, rather than selling solutions and tools. Therefore they push manual testing and manual testing tools over automated tools. From my vantage point, this has not been the case, but for the sake of discussion, I'll run with it ;).

Philip uses a definition of Manual Testing that I find inadequate. It’s not entirely his fault; he got it from Wikipedia. Manual testing as defined by Wikipedia is "the process of manually testing software for defects. It requires a tester to play the role of an end user and use most or all features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases”.

That's correct, in a sense, but it leaves out a lot. What it doesn't mention is the fact that manual testing is a skill that requires active thought, consideration, and discernment, the kind of work that quantitative measures (which automation is quite adept at handling) fall short on.

I answered back on a LinkedIn forum post related to this blog entry by making a comparison. I focused on two questions related to accessibility:

1. Can I identify that the WAI-ARIA attribute for the element being displayed has the defined string associated with it?

That’s an easy "yes" for automation, because we know what we are looking for. We're looking  for an attribute (a wai-aria tag) within a defined item (a stated element), and we know what we expect (a string like “Image Uploaded. Press Next to Continue”). Can this example benefit from improved automation? Absolutely!

2. Can I confirm that the experience for a sight-impaired user of a workflow is on par with the experience of a sighted user?

That’s impossible for automation to handle, at least with the tech we have today. Why? Because this isn’t quantifiable. It’s entirely subjective. To make the conclusion that it is either an acceptable or not acceptable workflow, a real live human has to address and test it. They must use their powers of discernment, and say either “yes, the experience is comparable” or “no, the experience is unacceptable”.

To reiterate, I do believe testing, and testers, need some help. However, more automation for the sake of more automation isn't going to provide it. What's needed is the ability to leverage the tools available, to move off as much of the repetitive busywork as we can, and get those quantifiable elements sequenced. Once we have a handle on that, we have a fighting chance to look at the qualitative and interesting problems, the ones we haven't figured out how to quantify yet. Real human beings will be needed to take on those objectives, so don't be so quick to get rid of the bodies just yet.

Friday, March 20, 2015

The Case for "Just a Little More" Documentation

The following comment was posted to Twitter yesterday by Aaron Hodder, and I felt compelled not just to retweet it, but to write a follow up about it.

The tweet:



"People that conflate being anti wasteful documentation with anti documentation frustrate me."

I think this is an important area for us to talk about. This is where the pendulum, I believe, has swung too far in both directions during my career. I remember very well the 1990s, and having to use ISO-9001 standards for writing test plans. For those not familiar with this approach, the idea was that you had a functional specification, and that functional specification had an accompanying test plan. In most cases, that test plan mapped exactly to the functional specification. We often referred to this as "the sideways spec". That was a joking term we used to say, in essence, "we took the spec, and we added words like confirm or verify to each statement." If you think I'm kidding, I assure you I'm not. It wasn't exactly that way, but it was very close. I remember all too well writing a number of detailed test plans, trying to be as specific as possible, only to have them turned back to me with "it's not detailed enough." When I finally figured out that what they really meant was "just copy the spec", I dutifully followed. It made my employers happy, but it did not result in better testing. In fact, I think it's safe to say it resulted in worse testing, because we rarely followed what we had written. We did what we felt we had to do with the time that we had, and the document existed to cover our butts.

Fast forward now a couple of decades, and we are in the midst of the "Agile age". In this Agile world, we believe in not producing lots of "needless documentation". In many environments, this translates to "no documentation" or "no spec" outside of what appears on the Kanban board or Scrum tracker. A lot is left to the imagination. As a tester, this can be a good thing. It allows me to open up different aspects, and go in directions that are not specifically scripted. That's the good part.

The bad part is that, because the documentation is sparse, we don't necessarily explore all the potential avenues, because we just don't know what they are. Often I create my own checklists and add ideas about where I looked, and in the past I've received replies like "You're putting too much in here, you don't need to do that, this is overkill." I think it's important for us to differentiate between not overthinking and overplanning on the one hand, and making sure we give enough information and provide enough detail to be successful on the other. It's never going to be perfect. The idea is that we communicate, we talk, we don't just throw documents at each other. We have to make sure that we have communicated enough, and that we really do understand what needs to happen.

I'm a fan of the "Three Amigos" model. Early in the story, preferably at the very beginning, three people come together: the product owner, the programmer, and the tester. That is where these details can be discussed and hammered out. I do not expect a full spec or implementation at this point, but it is important that everybody who comes to this meeting shares their concerns, considerations, and questions. There's a good chance we'll still miss a number of things if we don't take the time here to talk out what could happen. Is it possible to overdo it? Perhaps, if we're getting bogged down in minutiae, but I don't think it is a bad thing to press for real questions such as "Where is this product going to be used? What are the permutations we need to consider? What subsystems might this also affect?" If we don't have the answer right then and there, we still have the ability to say "Oh, yeah, that's important. Let's make sure that we document that."

There's no question, I prefer this more nimble and agile method of developing software to that of yesteryear. I would really rather not go back to what I had to do in the 90s. However, even in our trying to be lean, and in our quest for minimum viable products, let's be sure we are also communicating effectively about what our products need to be doing. My guess is, the overall quality of what comes out the other end will be much better for the effort.

Thursday, March 19, 2015

It's Not Your Imagination, It's REALLY Broken

One of the more interesting aspects of automation is that it can free you up from a lot of repetitive work, and it can check your code to make sure something stupid has not happened. The vast majority of the time, the test runs go by with little fanfare. Perhaps there's an occasional change that causes tests to fail, and they need to be reworked to be in line with the present product. Sometimes the failures are more numerous, and those can often be chalked up to flaky tests or infrastructure woes.

However, there are those days when things look to have gone terribly wrong, when the information radiator is bleeding red. Your tests have caught something spectacular, something devastating. What I find ironic about this particular situation is that we don't jump for joy and say "wow, we sure dodged a bullet there, we found something catastrophic!" Instead, what we usually say is "let's debug this session, because there's no way that this could be right. Nothing fails that spectacularly!"

I used to think the same thing, until I witnessed that very thing happen. It was a typical release cycle for us: stories being worked on as normal, a new feature tested and deemed to work as we hoped it would, with reasonable enough quality to say "good enough to play with others". We merged the changes to the branch, and then we ran the tests. The report showed a failed run. I opened the full report and couldn't believe what I was seeing. More than 50% of the spun-up machines were registering red. Did I at first think "whoa, someone must have put in a catastrophically bad piece of code!"? No, my first reaction was to say "ugh, what went wrong with our servers?!" This is the danger we face when things just "kind of work" on their own for a long time. We are so used to little hiccups that we know what to do with them. We are totally unprepared when we are faced with a massive failure. In this case, I went through and checked all of the failed states of the machines, looking for either a network failure or a system failure... only none was to be found. I looked at the test failure statements expecting them to be obvious configuration issues, but they weren't. I took individual tests, ran them in real time, and watched the console to see what happened. The screens looked like what we'd expect to see, but we were still failing tests.

After an hour and a half of digging, I had to face a weird fact... someone had committed a change that fundamentally broke the application. Whoa! In this continuous integration world I live in now, that's not something you see every day. We gathered together to review the output, and as we looked over the details, one of the programmers said "oh, wow, I know what I did!" He then explained that he had made a change to the way we fetched cached elements, and that change was having a ripple effect on multiple subsystems. In short, it was a real and genuine issue, and it was so big an error that we were willing to disbelieve it before we were able to accept that, yep, this problem was totally real.

As a tester, I sometimes find myself getting tied up in minutiae, until minutiae becomes the modus operandi. We react to what we expect. When a major sinkhole in a program gets delivered to us, we are more likely to distrust what we are seeing, because we believe such a thing is just not possible any longer. I'm here to tell you that it does happen, it happens more often than I want to believe, or admit, and I really shouldn't be as surprised as I am when it does.

If I can make any recommendation to the testers out there, it's this: if you are faced with a catastrophic failure, take a little time to see if you can understand what it is and what causes it. Do your due diligence, of course. Make sure that you're not wasting other people's time, but also realize that, yes, even in our ever-so-interconnected and streamlined world, it is still possible to introduce a small change that has a monumentally big impact. It's more common than you might think, and very often, really, it's not your imagination. You've hit something big. Move accordingly.

Tuesday, March 10, 2015

TESTHEAD Turns Five Today

With thanks to Tomasi Akimeta
for making me into "TESTHEAD" :)!!!
On March 10, 2010, I stepped forward with the boldest boast I had ever made up to that point. I started a blog about software testing, and in the process, I decided I would try to see if I could say something about the state of software testing, my role in it, and what I had learned through my career. Today, this blog turns five years old. In the USA, were this blog a person, we would say it's just about done with preschool, and in the fall it would be getting ready to go into kindergarten ;).

So how has this blog changed over the past five years? For starters, it's been a wonderful learning ground for me. Notice how I said that: a learning ground for me. When I started it, I had intended it to be a teaching ground for others. Yeah, that didn't last long. I realized pretty quickly how little I actually knew, and how much I still had to learn (and am still learning). I've found it to be a great place to "learn in public" and, in many ways, a springboard for many opportunities. During its first two years, most of the writing that I did in any capacity showed up here. I could talk about anything I wanted to, so long as it loosely fit into a software testing narrative.

From there, I've been able to explore a bunch of different angles, try out initiatives, and write for other venues, the most recent being a permanent guest blog with IT Knowledge Exchange as one of the Uncharted Waters authors. While I have other places I am writing and posting articles, I still love the fact that I have this blog, and that it still exists as an outlet for ideas that may be "not entirely ready for prime time". I am appreciative of the fact that I have a readership that values that and allows me to experiment openly. Were it not for the many readers of this blog, along with their forwards, shares, retweets, plus-ones, and mentions in their own posts, I wouldn't have anywhere near the readership I currently have, and I am indeed grateful to all of you who take the time to read what I write on TESTHEAD.

So what's the story behind the picture you see above? My friend Tomasi Akimeta offered to make me some fresh images that I could associate with my site. He asked me what my site represented to me, and how I hoped it would be remembered by others. I laughed and said that I hoped my site would be seen as a place where we could go beyond the expected when it comes to software testing. I'd like to champion thinking rather than received dogma, experimentation rather than following a path by rote, and the idea that there is intelligence that goes into testing. He laughed and said "so what you want to do is show that there's a thinking brain behind that crash test dummy?" He was referring to the favicon that's been part of the site for close to five years. I said "Yes, exactly!" He then came back a few weeks later and said "So, something like this?" After a good laugh and a smile at the ideas he had, I said "Yes, this is exactly what I would like to have represent this site: the emergence of the human and the brain behind the dummy!"

My thanks to Tomasi for helping me usher in my sixth year with style, and to celebrate the past five with reflection, amusement and a lot of gratitude for all those who regularly read this site. Here's to future days!

Friday, March 6, 2015

Taming Your E-Mail Dragon

Over on Uncharted Waters, I wrote a post about out-of-control E-mail titled "Is Your Killer App Killing You?" Even if that's a bit too much hyperbole, there is no question that E-mail can be exhausting, demoralizing, and just really hard to manage.

One thing that I think is really needed, and would make E-mail much more effective, is some way to extend messages to automatically start new processes. Some of this can be done at a fairly simple level. Most of the time, though, what ends up happening is that I get an email, or a string of emails, I copy the relevant details, and then I paste them somewhere else (a calendar, a wiki, some document, Slack, a blog post, etc.). What is missing, and what I think would be extremely helpful, would be ways to register key applications with your email provider, whoever it may be, and then have key commands or right-click options that would let you take a message, choose what you want to do with it, and then move to that next action.

Some examples... if you get a message and someone writes that they'd like to get together at 3:00 p.m., having the ability to schedule an appointment right there and lock the details of the message in place seems like it would be simple (note the choice of words: I said it seems it would be simple, I'm not saying it would be easy ;) ). If a message includes a dollar amount, it would be awesome to have a right-click or key command so that I could record the transaction in my financial software or create an invoice (either would be a legitimate choice, I'd think).

Another option that I didn't mention in the original piece, but that I have found to be somewhat helpful, is to utilize tools that will allow you to aggregate messages that you can review later. For me, there are three levels of email detail that I find myself dealing with.

1. E-mail I genuinely could not care less about, but that doesn't rise to the level of outright SPAM.

I am unsentimental. Lots of stuff from sites I use regularly comes to my inbox, and I genuinely do not want to see it. My general habit is to delete it without even opening it. If you find yourself doing this over and over again, just unsubscribe and be done with it. If the site in question doesn't give you a clear option for that, then make rules that will delete those messages so you don't have to. So far, I've yet to find myself saying "aww, man, I really wish I'd seen that offer I missed, even though I deleted the previous two hundred that landed in my inbox." Cut them loose and free your mind. It's easy :).

2. Emails with a personal connection that matter enough for me to review and consider, though I may well not actually do anything with them. Still, much of the time, I probably will.

These are the messages I let drop into my inbox, usually subject to various filter rules that sort them into the buckets I want to deal with. I want to see them, but I don't want to let them sit around.

3. That stuff that falls between #1 and #2.

For these messages, I am currently using an app called Unroll.me. It's a pretty basic tool: it creates a folder in my IMAP account (called Unroll.Me), and any email that I have decided to "roll up" and look at later goes into this app, and this folder. The app offers some other features as well; for each sender, you can unsubscribe (if the API of the service is set up to do that), include it in the roll-up, or leave it in your Inbox. Each day, I get a message that tells me what has landed in my roll-up, and I can review each item at that point.

I will note that this is not a perfect solution. The Unsubscribe works quite well, and the push to Inbox also has no problems. It's the roll-up step that requires a slight change in thinking. If you have hundreds of messages landing in the roll-up each day, IMO, you're doing it wrong. The problem with having the roll-up collect too many messages is that it becomes easy to put off for another day, which causes the backlog to grow ever larger, and in this case, out of sight definitely means out of mind. To get the best benefit, I'd suggest a daily read and a weekly manage, where you decide which items should be unsubscribed, which should remain in the roll-up, and which should go straight to your inbox.

In any event, I know that E-mail can suck the joy out of a person, and frankly, that's just no way to live. If you find yourself buried in E-mail, check out the Uncharted Waters article, give Unroll.me a try, or better yet, sound off below with what you use to manage the beast that is out of control email. As I said in the original Uncharted Waters post, I am genuinely interested in ways to tame this monster, so let me know what you do.

Thursday, March 5, 2015

All or Nothing, or Why Ask Then?

This is a bit of a rant, and I apologize to the people who are going to read this and wonder what I am getting on about. Since I try to tie everything to software testing at some point and in some way, hopefully this will be worth your time.

There's a drug/convenience store near my place of work. I go there semi-regularly to pick up things that I need or just plain want. I'm cyclical when it comes to certain things, but one of my regular purchases is chewing gum. It helps me blunt hunger, and it helps me get into flow when I write, code, or test. I also tend to pick up little things here and there because the store is less than 100 steps from my desk. As is often the case, I get certain deals. I also get asked to take surveys on their site. I do these from time to time because, hey, why not? Maybe my suggestions will help them.

Today, as I was walking out the door, I was given my receipt and the cashier posted the following on it.


Really, I get why they do this. If they can't score a five, then it doesn't matter; you weren't happy, end of story. Still, I can't help but look at this as a form of emotional blackmail. It screams "we need you to give us a five for everything, so please score us a five!" Hey, if I can do so, I will, but what if the visit was just shy of a five? What if I was in a hurry, and there were just a few too many people in the store? The experience was a true four, but hey, it was still pretty great. Now that experience is going to count for nothing. Does this encourage me to give more fives? No. What it does is tell me "I no longer have any interest in giving feedback", because unless the feedback says "Yay, we're great!", it's considered worthless. It's a way to collect kudos, and it discounts all other experiences.

As a software tester, I have often faced this attitude. We tend to be taught that bugs are absolute: if something isn't working right, then you need to file a bug. My question always comes down to "what does 'not working right' actually mean?" There are situations where the way a program behaves is not "perfect", but it's plenty "good enough" for what I need to do. Does it delight me in every way possible? No. Would a little tweak here and there be nice? Sure. By the logic above, either everything has to be 100% flawless (good luck with that), or the experience is fundamentally broken and a bug that needs to be addressed. The problem arises when we decide that "anything less than a five is a bug", which means the vast majority of interactions with systems are bugs... does that make sense? Even if, at some ridiculously overburdened fundamental level, it is true, it means the number of "bugs" in the system is so overwhelming that they will never get fixed. Additionally, if anything less than a five counts as zero, what faith do I have that areas I actually consider to be a one or a two, or even a zero, will actually be considered or addressed? The long-term tester and cynic in me knows the answer: they won't be looked at.

To the stores out there looking for honest feedback: begging for fives isn't going to get it. You will either get people who post fives because they want to be nice, or people who avoid the survey entirely. Something tells me this is not the outcome you are after, if quality of experience is really what you want. Again, the cynic in me thinks this is just a way to put numbers to how many people feel you are awesome, and to give little to no attention to the other 80% of responses. I hope I'm wrong.

-----

ETA: Michael Bolton points out below that I made a faulty assumption with my closing remark. It was meant as a quip, and not to be taken literally, but he's absolutely right. I anchored on the five, and that made me mentally assume an even distribution across the other four numbers. There is absolutely nothing that says that is the case; it's an assumption I jumped to specifically to make a point. Thanks for the comment, Michael :).

Wednesday, March 4, 2015

Book Review: Ruby Wizardry



Continuing with the "Humble Brainiac Book Bundle" that was offered by No Starch Press and Humble Bundle, I am sharing books that my daughter and I are exploring as we take on a project of learning how to write code and test software. Part of that process has been to spend time in the "code and see" environment that is Codecademy. If you have done any part of the Ruby track on Codecademy, you are familiar with Eric Weinstein's work, as he's the one who wrote and updates the Ruby section (as well as the Python, JavaScript, HTML/CSS, and PHP modules).

With "Ruby Wizardry", Eric takes his coding skills and puts them into the sphere of helping kids get excited about coding, and in particular, excited about coding in Ruby. Of all the languages my daughter and I are covering, Ruby is the one I’ve had the most familiarity with, as well as the most extended interaction, so I was excited to get into this book and see how it would measure up, both as a primer for kids and for adults. So how does Ruby Wizardry do on that front?

As you might guess from the title and the cover, this book is aimed squarely at kids, and it covers its topics by telling the story of two children in a magical land, dealing with a King and a Court that are not entirely what they seem, in a kingdom where next to nothing seems to work. To remedy this, the children of the story have to learn some magic to help make the kingdom work properly. What magic, you ask? Why, the Ruby kind, of course :)!

Chapter 1 basically sets the stage and reminds us why we bought this book in the first place. It introduces us to the language by showing us how to get it ("all adults on deck!"), how to install it, how to make sure it's running, and how to write our first "Hello World" program. It also introduces us to irb, our command interpreter, and prepares us for further creative bouts of genius.
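For the record, that first program really is a one-liner. You can type it straight into irb, or save it to a file (hello.rb is my own choice of name) and run it with the ruby command:

    # hello.rb -- run with: ruby hello.rb
    puts "Hello World"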

Chapter 2 looks at "The King's Strings" and handles, well, strings. Strings have methods, which allow us to append words like length and reverse written with dots ("string".length or 'string'.reverse). Variables are also covered, and show that the syntax that works on raw strings also works on assigned variables as well.

Chapter 3 is all about "Pipe Dreams", or, put more typically, flow control, with a healthy dose of booleans, string interpolation, and the use of structures like if, elsif, else, and unless to make our Ruby scripts act one way or another, based on the values in variables and how those values relate to other things (such as being equal, not equal, holding along with other conditions or in spite of them, etc.).
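Here's a small sketch of my own, pulling those pieces together:

    water_level = 8
    if water_level > 10
      puts "Burst pipe! All hands on deck!"
    elsif water_level > 5
      puts "Keep an eye on those pipes."
    else
      puts "Smooth sailing."
    end

    # unless runs its body when the condition is false
    puts "No need to panic." unless water_level > 10

    # string interpolation drops a value into a string
    puts "The water level is #{water_level}."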

Chapter 4 introduces us to a monorail, which is a train that travels in an endless loop. Fittingly, the chapter is about looping constructs and how we can build loops (using while, which runs as long as something is true; until, which runs up to the point something becomes true; and for, which iterates over things like arrays) to do important processes and repeat them as many times as we want, or need. It also warns us not to write our code in such a way as to create an infinite loop, one that will just keep running, like our monorail if it never stops.
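Here's my own toy version of each of the three loop styles:

    # while runs as long as its condition is true
    laps = 0
    while laps < 3
      laps += 1
      puts "Monorail lap number #{laps}"
    end

    # until runs up to the point its condition becomes true
    until laps == 0
      laps -= 1
    end

    # for iterates over a collection, such as an array
    for stop in ["dock", "castle", "market"]
      puts "Now arriving at the #{stop}"
    end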

Chapter 5 introduces us to the hash, which is a list of pairs, each with a key and a value, as well as the ability to add and remove items in arrays with push, unshift, pop, and shift. Additionally, we learn how to play with ranges and other methods that give us insight into how elements in both arrays and hashes can be accessed and displayed.
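A few lines of my own showing those methods in action (the names and fares are made up):

    passengers = ["Alice", "Ben"]
    passengers.push("Cora")      # push adds to the end
    passengers.unshift("Drake")  # unshift adds to the front
    puts passengers.pop          # pop removes and returns the last item: Cora
    puts passengers.shift        # shift removes and returns the first item: Drake

    # a hash pairs each key with a value
    fares = { "adult" => 10, "child" => 5 }
    puts fares["child"]          # prints 5

    # ranges make it easy to build or slice a list
    letters = ("a".."e").to_a    # ["a", "b", "c", "d", "e"]
    p letters[1..3]              # prints ["b", "c", "d"]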

Chapter 6 goes into symbols (a symbol is, in effect, a name), and the notation needed to work with them. Generally, the content of something is a string; the name of something is a symbol. Methods let us change values, convert symbols to strings and back, and otherwise manipulate data and variables.
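In irb, the string/symbol round trip looks like this (my example, not the book's):

    favorite = :ruby    # a symbol, written with a leading colon
    puts favorite.to_s  # to_s converts a symbol to a string: ruby
    p "ruby".to_sym     # to_sym goes the other way: :ruby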

Chapter 7 goes into how to define and create our own methods, including the use of splat (*) parameters, which allow us to pass any number of arguments. We can also define methods that take blocks by using "yield".
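A sketch of my own showing both a splat parameter and yield:

    # a splat (*) parameter gathers any number of arguments into an array
    def roll_call(*names)
      names.each { |name| puts "#{name} is aboard!" }
    end
    roll_call("Alice", "Ben", "Cora")

    # yield hands control to a block passed to the method
    def with_fanfare
      puts "Trumpets sound..."
      yield
      puts "...and the crowd cheers!"
    end
    with_fanfare { puts "The royal decree is read." }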

Chapter 8 extends us into objects, and how we can create and use them. We also learn about object IDs, which tell objects apart, as well as classes, which allow us to make objects with similar attributes. We also see how variables can have different levels of focus (local, global, instance, and class) and how we can tell each apart (variable, $variable, @variable, and @@variable, respectively).
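Here's a small class of my own invention that touches all four levels of focus:

    $kingdom = "the realm"  # $ marks a global variable

    class Knight
      @@count = 0  # @@ marks a class variable, shared by every Knight

      def initialize(name)
        @name = name  # @ marks an instance variable
        @@count += 1
      end

      def introduce
        motto = "For #{$kingdom}!"  # motto is a plain local variable
        puts "I am #{@name}, knight number #{@@count}. #{motto}"
      end
    end

    Knight.new("Percival").introduce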

Chapter 9 focuses on inheritance, which is how Ruby classes can share information with each other. This inheritance allows us to create subclasses and child classes that inherit attributes from their parents/superclasses.
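The syntax for inheritance is a single character; this little example is mine, not the book's:

    class Vehicle  # the parent (superclass)
      def move
        puts "Off we go!"
      end
    end

    class Monorail < Vehicle  # < makes Monorail a subclass of Vehicle
    end

    Monorail.new.move  # the subclass gets move for free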

Chapter 10 shows us a horse of a different color, or in this case, modules, which are a bit like classes, except that they can't be instantiated using the new method. Modules are somewhat like storage containers, letting us organize code when methods, objects, or classes won't do what we want. By using include or extend, we can add a module to existing classes or instances.
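A sketch of my own showing both include and extend:

    module Singable
      def sing
        puts "La la la!"
      end
    end

    class Minstrel
      include Singable  # include mixes the module into a class
    end

    Minstrel.new.sing   # every Minstrel instance can sing

    town_crier = Object.new
    town_crier.extend(Singable)  # extend adds the module to one instance
    town_crier.sing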

Chapter 11 shows us that sometimes the second time's the charm, or, more specifically, we start getting the hang of refactoring: using or-equals (||=) to assign variables, using ternary operators for short decisions, using case statements instead of stacks of if/elsif/else statements, returning boolean values, and, most important, removing duplicate code and keeping methods small and manageable.
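A few of those tricks side by side, in a toy example of my own:

    # ||= assigns only when the variable is nil or false
    name = nil
    name ||= "stranger"

    # a ternary keeps a short either/or decision on one line
    puts(name == "stranger" ? "Well met!" : "Welcome back, #{name}!")

    # case reads more cleanly than a stack of if/elsif branches
    case name
    when "stranger" then puts "Pull up a chair."
    when "Alice"    then puts "The king awaits."
    else puts "Who goes there?"
    end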

Chapter 12 shows us the nitty-gritty of dealing with files. Opening, reading from, writing to, closing, and deleting files are all critical if we want to actually save the work we do and the changes we make.
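The whole round trip fits in a few lines (the file name is my own):

    # write to a file, read it back, then tidy up
    File.open("scroll.txt", "w") do |file|
      file.puts "Hear ye, hear ye!"
    end

    puts File.read("scroll.txt")  # prints the line we just wrote
    File.delete("scroll.txt")     # remove the file when we're done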

Chapter 13 encourages us to follow the WEBrick Road, or how we can use Ruby to read and write data from the Internet to files, or back to Internet servers, using the open-uri gem. A gem is a packaged set of Ruby files that we can use to make writing programs easier (and there are lots of Ruby gems out there :) ).
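At its simplest, with the Ruby versions current as of this writing, open-uri looks like this (the URL is a stand-in, not a real endpoint):

    require 'open-uri'

    # open-uri teaches open to fetch a URL as if it were a file
    page = open("http://example.com")
    puts page.read[0, 200]  # print the first 200 characters of the page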

Chapter 14 gives some hints as to where you, dear reader, might want to go next. This section includes a list of books, online tutorials, and podcasts, including one of my favorites, the Ruby Rogues podcast. It also includes interactive resources such as Codecademy, Code School, and Ruby Koans, and a quick list of additional topics.

The book ends with two appendices that talk about how to install Ruby on a Mac or a Linux workstation and some of the issues you might encounter in the process.

Bottom Line:

For a "kids book", there's a lot of meat in here, and to get through all of the examples, you'll need to take some time and see how everything fits together. The story is engaging and provides the context needed for the examples to make sense, and each section provides a review so that you can really see if you get the ideas. While aimed for kids, adults would be well suited to follow along as well. Who knows, some wizarding kids might teach you a thing or three about Ruby, and frankly, that's not a bad deal at all!