Wednesday, August 13, 2014

The Conferring Continues at #CAST2014


Hello everyone! It's wild to think that my week in New York will be ending tomorrow morning. I've had so many great experiences, conversations and interactions with so many great people. I've met both of my PerScholas mentees, and I've enjoyed watching them take in this experience. It's also been great fun talking to so many new friends, and I will genuinely miss this "gathering of the tribe", but let's not lament leaving when there is a whole day of interaction and conferring (not to mention my own talk ;) ) still to happen.

Lean Coffee this morning dealt with an interesting challenge: the water main broke out in front of the Marlton, and the water for most of 8th Street was shut off. Water was only restored a little after 7:00 a.m., so the coffee part of Lean Coffee is only just starting to come in for the participants.

One of the topics we started with was the transient nature of software testing, and why so few people who come into testing stay with it. There are lots of reasons for this. In my career, many of the people I have met who were software testers have gone on to do other things. Some became programmers, some became project managers, some became system administrators, and some became managers. Of the people I knew who were testers for more than five years, most remained testers going forward. That's just my reaction, but I think that, by the time people have reached the five-year mark as testers, they have decided that they either like testing or are good at it. Of course, this may be observation bias, since I am seeing people who followed my own path. It's an interesting topic to look at further.

The second topic focused on how software testers train and learn about their jobs, and how they find the time to do it. Many of the participants make a deliberate effort to carve out time for material that will be relevant to them. For me personally, I like to use my time on the train when I commute to and from work to read, think, and ponder ideas. Others use materials like Coursera or uTest University. Some people enjoy writing blogs (like me ;) ). Key takeaway: everyone has a different way to learn.

The third topic focused on how people get through the large amount of material at a conference. How do they capture it all? There are lots of techniques. My favorite is writing blog posts in semi-real time (like this one). Instead of writing a straight summary of what I am hearing, I try to add a personal take on the details I have learned, so that what I capture is more personal and actionable. Others use sketchnotes, doodles, mind maps, recordings of talks, etc. The key takeaway is not the method you use to capture, but that you make what you capture actionable.

The fourth topic was moving into consulting or contracting, and what it takes to make that work. Several of the participants shared their experiences of how they made the transition, and the challenges they faced. In addition to doing the work itself, they had to find gigs, collect money, handle paperwork and tax filing, etc. At times there will be unusual fits, and engagements that may or may not be an ideal situation, but a benefit of being a consultant is that you are there for a specific purpose and for a specific time, and at the end of it you can leave. You need to be able to deal with a high level of ambiguity, and that ambiguity tends to be a big hurdle for many. On the other side, there is a mindset of learning and continuous pivoting, where there's always something new to learn. The pay can be good, but it can be sporadic. Ultimately, at the end of the day, we are all consultants, even if we are employed by a company and collecting a paycheck.

The final topic was "are numbers evil or just maligned?" Some people look at numbers as a horrible waste of time. Don't count test cases, don't count bugs, don't count story points. In some ways, this quantitative accounting is both aggravating and somewhat necessary. Numbers provide information. Whether that information is good or bad depends a lot on what we want to measure. If we are dealing with things that are consistent (network throughput, megabytes of download, etc.), collecting those numbers matters. Numbers of test cases, numbers of stories, etc. are much less helpful, because there is so much variation in what those values mean and how much we can control them. When a measurement cannot be made actionable, or it's really vague, then the numbers don't really make a difference. Numbers can be informative, or they can be noise. It's up to our organizations (and us) to figure out what is relevant and why.

Thanks to all the participants for being involved at this early hour :).

---

The morning keynote for today was delivered by Carol Strohecker of the Rhode Island School of Design. The topic was "From STEM to STEAM: Advocacy to Curricula", and the focus was on the fact that, while we have been emphasizing STEM (Science, Technology, Engineering and Mathematics), we miss a lot of important details when we do not include "Art" in that process.

STEAM emphasizes the importance of art and design, and how important they are to innovation and economic growth. Tied into this is the maker movement, which likewise emphasizes not just the functional but the aesthetic. There are a lot of neat efforts and initiatives taking place at the academic level, as well as the legislative level, to get these initiatives into schools so that we can emphasize this balance.

There are many tools we can use to help us look at the world in a more artistic and aesthetic way. Art is nebulous and subjective. It does not have the same level of solid, concrete syntax that science or language has (and language is pushing it). Much of the variance in the artistic leads to the development of a particular skill or attribute that we call "intuition". It's not concrete, it's not focused, it's not based on hard data, but it informs in ways that are just as valuable. Artistic endeavors help to develop these traits. Cutting the arts out of our economic vision puts us at a significant disadvantage.

[I had to duck out at this point to take care of some AST business, so I can only give you my take on the actual details I heard discussed. Sorry for the gap.]

One of the quotes shared that makes great sense to me comes from Immanuel Kant:

"The intellect can intuit nothing, the senses can think nothing. Only through their union can knowledge arise."

It feels like this talk is resonating with many people, as the Open Season is vigorous and active. An emphasis on synthesizing the inputs and the areas we interact with makes for a richer and greater whole. It takes a different level of thinking to make it happen, but there are amazing opportunities for those willing to stretch into new disciplines. Many books have been suggested, including "Inventing Kindergarten", which talks about the importance of play and discovery to learning and skill acquisition/development.

James Bach made an interesting comment. With the emphasis on STEM, and now STEAM, who gets left behind now? Are there other areas that are now orphaned and unfunded, or is it that these areas have been unfunded for so long that they cry out for help? Does it make sense to work with a small group, or do we need to consider that STEAM is meant to be an all-encompassing discipline focus?


---


Up next is Harrison's and my talk... for obvious reasons, I will not be live-blogging that one ;). I do, however, encourage everyone to tweet comments or even ask questions, and I'll be happy to follow up and answer them :).


---

During lunch we heard the results of the test challenge, as well as the results of the Board of Directors election.

Returning members of the board:
- Markus Gärtner
- Keith Klain (re-elected)
- Michael Larsen
- Pete Walen

New Members:
- Erik Davis
- Alessandra Moreira
- Justin Rohrman

Congratulations to everyone, I think 2014-2015 will be an awesome year :).

Ben Simo took the stage after lunch to talk about the messy rollout of the healthcare.gov web site and all of the problems that he, alone, was able to find with the site. Ben started a blog to record the issues that he found, and it received a *lot* of attention from the media, and from the government as well.

For the details of each of the areas he explored, you can see the examples he posted at http://blog.isthereaproblemhere.com/. What I found interesting was that, as Ben tested and logged his discoveries, it became clear just how messed up many of the areas were, and how often his efforts uncovered strange issues without even trying. Ben was not asked to speak to Congress or to testify, and he did not see any government action come from his efforts, but he did become the target of DDoS attacks, and media outlets were calling him very regularly.

On the side of the "Is There a Problem Here?" site, Ben lists a variety of test heuristics, and he applied most of them to help uncover the bugs he found. Many of the issues discovered fit into these specific heuristics. Here they are:

CONSISTENCY HEURISTICS
from James Bach and Michael Bolton

H istory
I mage
C omparable Products
C laims
U ser Expectations
P roduct
P urpose
S tatutes
-F amiliar

FAILURE HEURISTICS
from Ben Simo

F unctional
A ppropriate
I mpact
L og
U ser Interface
R ecovery
E motions

SECURITY VULNERABILITIES
the OWASP Top 10

1. Injection
2. Broken authentication and session management
3. Cross-Site Scripting
4. Insecure Direct Object References
5. Security Misconfiguration
6. Sensitive Data Exposure
7. Missing Function Level Access Control
8. Cross-Site Request Forgery
9. Using Components with Known Vulnerabilities
10. Unvalidated Redirects and Forwards

What was also amazing to see was that all of these issues were discovered and reported using nothing more than his own data. He said he would not try to access other people's information, and he did not. Even then, he still found plenty of issues that should have made the healthcare.gov team both very nervous and very grateful.
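As a small, entirely hypothetical illustration of the kind of check implied by item 10 on that OWASP list, here's a minimal Python sketch of probing an endpoint for an unvalidated redirect. This is my own sketch, not one of Ben's actual tests; the URL and parameter names are made up.

```python
# Minimal sketch: probe for an unvalidated redirect (OWASP Top 10, item 10).
# The target URL and parameter name below are hypothetical.
import requests
from urllib.parse import urlparse

TARGET = "https://example.gov/login"          # hypothetical endpoint
REDIRECT_PARAM = "returnUrl"                  # hypothetical parameter name
EVIL_DESTINATION = "https://attacker.example.net/"

def redirect_is_unvalidated():
    """Return True if the site will redirect to an external host we name."""
    resp = requests.get(
        TARGET,
        params={REDIRECT_PARAM: EVIL_DESTINATION},
        allow_redirects=False,                # inspect the Location header ourselves
        timeout=10,
    )
    location = resp.headers.get("Location", "")
    # A 3xx response pointing at a host we do not own suggests the redirect
    # target is not being validated server-side.
    return (300 <= resp.status_code < 400
            and urlparse(location).netloc == urlparse(EVIL_DESTINATION).netloc)

if __name__ == "__main__":
    print("Unvalidated redirect?", redirect_is_unvalidated())
```

The point isn't the specific script; it's that, as Ben showed, checks like this take very little tooling and only your own data.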

---

Geoff Loken tickled my interest with his talk titled "The history of reason; arts, science, and testing". As one who finds philosophy and all of its iterations over the millennia fascinating, I've found the different ways that reason and intuition developed over the ages to be a worthwhile area to study and learn about. Ultimately, philosophy comes down to epistemology, and epistemology led to the scientific method (observation/conjecture, hypothesis, prediction, testing, analysis).

Observation cannot tell us everything. There are things we just can't see, hear, touch, smell, or taste. At those times, we have to use reasoning skills to go farther. The scientific method does not prove that things exist, but it can, to a point, disprove claims about an item's existence.

Geoff played a clip from Monty Python and the Holy Grail (the witch scene) which showed what could be considered a case of bad test design. Ironically, based on the laws and understanding of the world at the time, they performed tests that were actually in line with their standards of rigor. We simply have codified our understanding of more disciplines since their time.

One of the other aspects that comes into play when we are testing is that science can quantify, but it can't qualify. There are aspects of testing we need to look at that go beyond the very definable and specific data aspects.

Overall, I found this to be a lot of fun to discuss, and it reminded me of many of the historical dilemmas we have faced over the centuries. We look at what appear to be totally irrational actions in centuries past. It makes me wonder what from our current time will look irrational 500 years from now ;).

---

The last session I attended today was Justin Rohrman's talk "Looking to Social Science for Help With Metrics". Metrics is considered a dirty word in many places, and that disparaging attitude is not entirely unjustified. Metrics are not entirely useless, but measuring them in the right context is important. If they are not used in the correct context, they can be benign at best and downright counter-productive at worst.

By focusing on the metrics that actually matter, we can look at measurements that help us learn about the systems we use and where we are in the life of the product. Some of the context-driven measurements that we could/should be looking at include:


  • work in progress
  • cycle time
  • lead time
  • touch time
  • slack
  • takt time
  • time slicing
  • variation
  • Find -> Fix -> Retest loop

These measurements intrigue me, and they seem to be much more in line with what could actually help an organization. They fit well into what is referred to as the Lean model. Lean focuses on measurement for improvement, which is not to be confused with measurement for control.
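To make a couple of those terms concrete, here's a minimal sketch (my own illustration, not something from Justin's talk) of deriving lead time and cycle time from ticket timestamps. The dates and field names are made up.

```python
# Minimal sketch: lead time vs. cycle time from hypothetical ticket timestamps.
from datetime import datetime
from statistics import mean

tickets = [
    # created (request made), started (work began), finished (delivered)
    {"created": datetime(2014, 8, 1), "started": datetime(2014, 8, 4), "finished": datetime(2014, 8, 6)},
    {"created": datetime(2014, 8, 2), "started": datetime(2014, 8, 7), "finished": datetime(2014, 8, 11)},
    {"created": datetime(2014, 8, 5), "started": datetime(2014, 8, 6), "finished": datetime(2014, 8, 8)},
]

# Lead time: from the request being made to the work being delivered.
lead_times = [(t["finished"] - t["created"]).days for t in tickets]
# Cycle time: from work actually starting to the work being delivered.
cycle_times = [(t["finished"] - t["started"]).days for t in tickets]

print(f"average lead time:  {mean(lead_times):.1f} days")
print(f"average cycle time: {mean(cycle_times):.1f} days")
```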

I'm currently fortunate to work for a company that does not require me to chase down a number of useless metrics, though we have a few core metrics that we look at. These examples give me hope that we can get even more focused on measurement values that are meaningful. I'll definitely bring the context-driven list to my engineering team, or better yet, try to see if I can derive them and report them myself :).

---

Tim Coulter and Paul Holland wandered about the venue and checked out as many of the talks as they could, so they could share some "TimBITS" and takeaways. Some takeaways:

- Direct your testing keeping the business goals in mind
- Tighten up the feedback loop, it will make everyone happy
- Write your bug reports as if they were for a memory-wiped future you
- When you add a test specialist into a development team, everybody wins
- Skills atrophy: Testing skills must be used or you will lose them
- Social sciences all play a part in testing, it's not just technology
- Testers appear to be hardwired to play games
- Art has an impact on Software Testing
- Good mentoring is hard. Answer your mentees questions with more questions
- It's a sin to test mobile apps sitting at your desk
- To be a good tester you need to be a "sort of skilled hacker"
- Pair with people in all roles; they will all give you different insights
- You can't prove quality with science, but you can prove facts that may alter judgement
- Try to show the need for what you want to teach before you try to teach it
- "Hey, these aren't just testers but real people!"

Many of the participants are sharing their own takeaways. They cover much of what has already been mentioned, but they also show that people are seeing that "there is an amazing community that looks out for one another and actively encourages each other to do their best work".

---

It's been fun, but all good things must come to an end. We are now at the closing keynote, given by Matt Heusser, and the title for this one is "Software Testing State of the Practice (And Art! And Science!)".

The first thing Matt says he is seeing is that there is a swing toward programming, and back again. The debate about what testers should be doing, what kind of work they should be doing, and who should be doing it is still raging. Should testers be programmers? Some will embrace that, some will fight it, and some will find a place in the middle.

Automation and tech are increasing in our lives (there are supermarkets where self-checkout is becoming the norm). The problem with this prevalence is that we are losing the human touch and interaction. Another issue is that testing looks to be a transient career. It's entirely likely that more than 50% of the people here attending their first CAST will not even be in testing seven years from now.

Another issue we are seeing is fragmentation within testing. What does "test" even mean? What is testing, and who has the final say on the actual definition? We have a wide variety of codebases and strategies for how we code and how we test. All of this leads to a discipline where many people don't really understand what test teams actually do. We are a check box that needs to be marked.

The Agile community in the early 2000s was in a similar situation, and they dared to suggest they had a better way to write software. That became the Agile movement, and it has changed much of the software development world. Scrum is now the default development environment for a large percentage of the software development population. Scrum calls for testing to be done by embedded members of the team, and not by a separate entity.

Matt refers to three words that he felt would change the world of software testing. Those words are:

- Honesty
- Capable
- Reach

We need to be honest with our dealings and we need to show and demonstrate integrity. We need to prove that we are capable and competent, and we need to reach out to those who don't understand what we do or why we do it.

I added a comment to this talk in open season when asked about why testers tend to move out of testing, and I think that there's something to be said that testers are broad generalists. We have to be. We need to look at the product from a wide variety of angles, and because of that, we have a broad skill set that allows us to pivot into different positions, either temporarily or as a new job. I personally have done stints as a network engineer, an application engineer, a customer support representative, and even a little tech writing and training, all the while having software testing as a majority of my job or as a peripheral component of it. I'm sure I'm not an isolated incident.

There's a lot to be said about the fact that we are a community that offers a lot to each other, and we are typically really good at that (giving back to others). As I said in a tweet reply, if we inspire you, please bring the message back to your friends or your team. Let them know we are here :).

