
Thursday, April 4, 2019

Data: What, Why, How??: an #STPCon Live Blog Entry


All good things must come to an end, and since I need to pick up family members from the airport today, this is going to have to be my last session of the conference. I'm not entirely sure why, but Smita seems to end up in the closing sessions at STP events. Thus Smita Mishra will be the last speaker I'll be live blogging for this conference. My thanks to everyone involved in putting on a great event and for inviting me to participate. With that, let's close this out.

Data is both ubiquitous and mysterious. There are a lot of details we can look at and keep track of. There is literally a sea of data surrounding our every action and expectation. What is important is not so much the aggregation of data but the synthesis of that data, making sense of the collection, separating the signal from the noise.

Data scientists and software testers share a lot of traits; in many cases, the terms are interchangeable. Data, and the analysis of data, is a critical domain inside the classic scientific method. Without data, we don't have sufficient information to make an informed decision.

When we talk about "Big Data" what we are really looking at are the opportunities to parse and distill down all of the data that surrounds us and utilize it for special purposes. Data can have four influences:


  • Processes
  • Tools
  • Knowledge
  • People

Data flows through a number of levels, from chaos to order, from uncertainty to certainty. Smita uses the following order: Uncertainty, Awakening, Enlightenment, Wisdom, Certainty. We can also look at the levels of quality for data: Embryonic, Infancy, Adolescence, Young Adult, Maturity. To give these levels human attributes, we can use the following: Clueless, Emerging, Frenzy, Stabilizing, Controlled. In short, we move from immaturity to maturity, from not knowing to knowing for certain, and so on.

Data can likewise be associated with its source: Availability, Usability, Reliability, Relevance, and the ability to present the data. Additionally, it needs to have an urgency to be seen and to be synthesized. Just having a volume of data doesn't mean that it's "Big Data". The total collection of tweets is a mass of information. It's a lot of information, to be sure, but it's just that; we can consider it "matter unorganized". The process of going from unorganized matter to organized matter is the effective implementation of Big Data tools.

Data can sound daunting but it doesn't need to be scary. It all comes down to the way that we think about it and the way that we use it.

Testing Monogatari: a #STPCon Live Blog Entry



Ah yes, the after-lunch stretch. I will confess I'm a tad drowsy, but what better way to cure that than a rapid-fire set of five-minute test talks?

We've had several speakers play along with a variety of topics:

Brian Kitchener talked about getting hired into a QTP shop with a background in Selenium. His topic was how to make organizational change when the organization is a big ship and you are just one person. The answer: lots of little changes, proving the benefits of small-scale changes so that they can be scaled up into larger ones.

Brian Saylor talked about the idea that "can't" is a four-letter word. To Brian, "can't" is the most offensive word.

"I can't do that" really means "I don't want to do that".
"It can't be done" often means "I'm too lazy to do this"
"You can't do that" often means "I can't do that, so therefore you shouldn't be able to"

Thus Brian asks us to be sensitive and to strike the word "can't" from our vocabulary and see what it is we are really saying.

Tricia Swift talked about changes she has seen over the last twenty years. The first thing she noted was that only about ten percent of her MIS class were women, and even then women were far better represented there than among her CS friends. She is happy to see that that has changed. ISO compliance and Waterfall were everything twenty years ago (and oh do I remember *that*). Thankfully, much of that has changed, or at least the approach has. Most important, she is seeing women in development where twenty years ago there were very few.

Raj Subramanian wants to make it clear that "It Is OK If You Mess Up". The issue with a culture where messing up is punished is that nothing creative or innovative will be tried. Cultures that penalize people for making mistakes ensure that they will remain stagnant. Raj shared an example where a test had an effect on production (an airline app), and the learning experience was that his rogue test exposed a series of policy problems with their production server. Raj still has that job and the product was improved, all because a mistake was acknowledged and accepted.

Anan Parakasan shared his experiences at the conference and his focus on developing "trust". It's a word that has a lot of different meanings and isn't exactly easy to nail down. Anan shared ways that he felt trust could be better fostered and developed on our teams. Additionally, developing Emotional Intelligence helps considerably by making a team more effective. Think less adversarial and more collaborative.

The final talk came from Heath (?), who talked about "feature flag insanity" and the fact that his organization has gone a bit trigger happy with their feature flags. Having them means running with them on, off, and not there at all. His point, though, was that a flag had contexts that were understood by some people but not others. Needless to say, one feature flag had an option to fail every credit card transaction. Nobody could find it, it got rolled out to production, every credit card transaction in production failed, and that cost a whole lot of money. In short: know your system, document it, and make sure everyone knows what the feature flags do.
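To make that "document it" point a little more concrete, here's a rough sketch of what a documented flag registry could look like. Everything in it (the flag name, the owner field, the safe default) is my own illustration, not the app Heath described:

```python
# A toy "document your flags" registry: every flag carries a description, an
# owner, and a safe default, so a flag that fails every card transaction can't
# lurk in the system as tribal knowledge. All names here are made up.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass(frozen=True)
class FeatureFlag:
    name: str
    description: str
    owner: str
    default: bool  # value used when nobody has set the flag explicitly

FLAGS: Dict[str, FeatureFlag] = {
    "fail_all_card_transactions": FeatureFlag(
        name="fail_all_card_transactions",
        description="Test-only: force every card transaction to fail.",
        owner="payments-team",
        default=False,  # must never default on outside a test environment
    ),
}

def is_enabled(name: str, overrides: Optional[Dict[str, bool]] = None) -> bool:
    flag = FLAGS[name]
    return (overrides or {}).get(name, flag.default)

print(is_enabled("fail_all_card_transactions"))  # False unless explicitly overridden
```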

It came down to Raj and Brian Kitchener, and they both walked away with $50 Amazon gift cards. And with that, I'm much more awake ;).


Making the Move to Continuous Testing: a #STPCon Live Blog Entry


Sorry for the gap in the 10:00 hour, but I took advantage of the opportunity to talk with a vendor that we actually work with to discuss an issue we've been dealing with. Kudos to them for taking the time, letting me vent and demonstrate, and figuring out next steps.

With that, I'm definitely interested in seeing what is happening with Continuous Testing. I first started reading about this topic almost ten years ago and it is interesting that it is still a hot button topic.

We do a lot with Continuous Integration (CI) where I am and have made strides with Continuous Delivery (CD). Now I'm curious as to the possibility of implementing Continuous Testing (CT).

Alissa Lydon works with Sauce Labs, and she is starting with getting a better understanding of what CT actually means. It doesn't mean that automated tests are running constantly (though automation is a part of the discussion). It does mean testing at every stage of development.

The first step to getting CT to have a fighting chance is that everyone has to be committed to quality, which means every level of the development process needs to think about testing and staying ahead of the curve. Think of having programmers and testers sitting together, pair navigating and pair programming/testing to help get testing principles and approaches into the process of writing software before line one of code even gets written. The key is that everyone needs to be on the same page and as early as possible.

Step two is testing at every stage of the development cycle. No code should pass through any stage of development without some level of testing being performed. This can range everywhere from requirement provocations to unit tests to integration tests to end-to-end tests, whatever makes sense at the given point in time.

Let's segue here and mention that continuous testing does not necessarily mean automated testing. Automation can make sense at a variety of points but you can have a mix of automated and live/manual tests and still be considered CT. There is also a balance of the levels of testing that will be done at any given time.

Next is being able to leverage the skills of the team so that automated testing can advance and get put in place. While automation is not the sole criterion for CT, it is definitely important to making CT work. It will take time and attention and it will certainly take some development chops. Automation needs to be treated as every bit as important as the production code, and I'll fight anyone who says otherwise ;).

Additionally, a breadth of devices and platforms are critical to getting a realistic view of how your environment will look on a broad cross-section of resolutions and sizes, as well as user agents.

The ability to scale the systems being tested is important. Yes, we may start with just one machine, but ideally, we want to be able to simulate our production environment, even if we go with a smaller number of machines and try to extrapolate what a larger number would provide.

The final step is implementing and actually using/reviewing analytics. In short, know what your product is doing and what is being used in order to focus efforts.




Testing all the World's Apps with AI: an #STPCon Live Blog Entry



We are here on the last day of STP-Con Spring 2019. Today is going to be nice for me in that I won't have any responsibilities beyond listening and typing these blog posts. It feels so good to be "done" when it comes to speaking. I enjoy it, don't get me wrong, but there's a level of focus and energy that consumes you until you're finished. I'm happy to be on the other side for the rest of the conference.


This morning we are listening to Jason Arbon talking about testing with AI. Part of the puzzle to use AI is to teach machines the following concepts:

  • See
  • Intent
  • Judge
  • Reuse
  • Scale
  • Feedback

These are concepts our machines have to come to grips with to be able to leverage AI and have us gain the benefits of using it. Our machines need to be able to see what they are working with, they need to be able to determine the intent of tests, to judge their fitness and purpose (machines making judgment calls? This might make a number of my long-held assertions obsolete ;) ). Machines need to be able to reuse the code and the learnings that they have been able to apply. The systems need to be able to scale up to run a lot of tests to gather a representative set of data. Finally, the scripts and apps need to be able to respond to the feedback loop.

Jason is showing examples of "Automated Exploratory Testing". Wait, what? well, yes, there are some things we can do that would certainly correspond with what we would call classic exploratory testing. At least we can look at a variety of examples of what we would normally do. Sure, a human could be super random and do completely whacked out things, but most of us will use a handful of paths and approaches, even if we are in exploratory mode. Thus, it is possible to look at a lot of "what if" scenarios. It requires a lot of programming and advanced work, but at some point, we can collect a lot of data to do certain things. There's a convergence force at play so yeah, I guess bots can do exploratory testing too :).


Wednesday, April 3, 2019

Using Agile to Adapt to Changing Goals: an #STPCon Live Blog Entry



Using Agile to Adapt to Changing Goals

There's an old joke that says, basically, "If you don't like the way things are today, come back tomorrow." Meaning the only true constant is change, and change happens a lot. We all have a choice: be a driver of change or have change driven onto us. To that point, any goals that your team has made are just as likely to change.


Sue Jagodzinski described her own situation of change within her company and the chaos that resulted from it. Part of her story described the challenges of managing two teams that were focused on different things. Both teams needed to use and support the adoption of a test automation tool and to be able to enhance and support that tool. One team focused on build, test, and automation. The other team focused on training, support, and documentation. While the tool they were working on was the same, they had very different missions and purposes and that would show up in the backlogs they built up and the priorities they placed on their respective backlogs.

Here's a bit of nerdiness for you. The training and support team decided to call themselves Team1. In retaliation, the other team called themselves Team0 because they understood how arrays are indexed (and yes, this was a real dynamic between these teams; I thought she was kidding, she assured me she was not).

To this end, Sue determined that there were some key danger signs happening here. By segregating responsibility between two teams, an unhealthy competition developed. Trust issues developed along the way, and when there were issues there was plenty of finger pointing to go along with them. Most visible was the fact that what each team decided to work on was guided by whatever the other team was not working on.

By moving to a shared backlog, the teams came into immediate conflict and had to negotiate how to change that dynamic. Some of the areas that Sue addressed were:

- How can we determine the skills needed on each team, and institute moves if necessary?
- What are the soft skill levels of the team members? Who were the leaders? Who could become a new leader if needed?
- Who would be best served by going elsewhere?
- How could we restructure the teams to be less contentious?

The last one was easy-ish to solve by changing names. The two teams, for purposes of this presentation, were "Awesomation" and "Thorium". Both teams agreed to go to a single backlog. Teams were set up so that both had technical expertise in the test framework. More to the point, an emphasis was made to reward and encourage those who would share their knowledge and skill sets. By doing this Sue was able to get close to equalizing the teams. By sharing a backlog, the grooming sessions were done together, with their expected challenges. Sue took over as the product owner and she had to learn what that entailed. She described the process as "harder than many people might realize", in addition to getting a better knowledge of the product.

The net result of this process, though not easy, was that team members gained the ability to learn how to do any task on either of the teams. In other words, two teams with Generalizing Specialists (service mark Alan Page ;) ). In the process, each team member's engagement increased, different perspectives were heard, learning and reflection were done in the retrospectives, and the teams learned to progress together. Perhaps the most valuable skill they both discovered was adapting to priorities and being able to pivot and change if/when necessary.

Don't Make Yourself Obsolete: an #STPCon Quasi Live Blog Entry


Design Inclusively: Future Proof Your Software

Since it's impossible for me to live blog my own talk, I'm going to give a pre-recorded recollection and share thoughts about my talk and what I'm hoping to impart with it.

Accessibility deals with the ability to design software so that it can work with technologies to help people with various disabilities use applications that they otherwise would not be able to use. 

Inclusive Design allows programmers to create and design websites and applications that are available to the largest population possible, without having to rely on the external technology necessary for sites to be Accessible.

Inclusive Design and Accessibility go hand in hand and are complementary endeavors but Inclusive Design, done early, can help make the last mile of Accessibility that much easier. That's the key takeaway I want to convince people to consider and advocate for. 

Inclusive Design is not magic. In many cases, it’s taking work that has already been done and making some small but significant changes. New web technologies help to make Inclusive Design more effective by utilizing semantic enhancements. More important, making this shift can also help you make better design choices in the future, without having to bolt on or re-architect your existing code, possibly at great cost in time, energy and finances. Sadly, we are still in a model where Accessibility/Inclusive Design is driven by two specific parameters:

- how much money do we stand to gain from doing this (because a big deal is pending and the paying customer is demanding it)
- how much money do we stand to lose from not doing this (because we're actually being sued for being in violation of various disabilities acts)

Fact is, we can't really say what technology will be like in five, ten, or twenty years. We can, however, say with great certainty what we are likely to be like in those same time frames. When I talk about future proofing software, I don't mean from a technological standpoint, I mean from a usage standpoint. We're not future proofing for machines. We are future proofing for US! At some point, every one of us will leave the happy sphere of what is commonly called "normative". For some, it's never been a reality. For many, the cracks in that sphere start to appear around age 45. Seriously, I didn't care much about Accessibility or think much about it before I turned 45 and received the gift that keeps on giving (i.e. the need for reading glasses). That was my first step into the non-normative world of adaptive needs and being a target audience for Accessibility as an everyday part of life. I can assure you it will not be my last.

There are a variety of things that can be done and, truth be told, they do not have to be radical changes. Very often people will look at Accessibility and Inclusive Design changes and say, "hey, wait a minute, we applied all of these changes and I don't see any difference." Right! That's the whole point. Accessibility and Inclusive Design don't have to be ugly or inelegant. I'd argue that Accessible and Inclusive software is actually more beautiful because its form is enhanced by its function.

Oh, and for those who have never seen my presentation, without spoiling the surprise, I'll share a phrase out of context that speaks volumes:

"IKEA GETS IT!!!"

Testers as Their Own Worst Enemies: an #STPCon Live Blog

Testers as Their Own Worst Enemies

Michael Bolton and I share a number of interests. We're both musicians and we have often talked about music and riffed with each other at conferences over the years. Michael starts out his talk with a clearly personal example. There's a (rightly so, I believe) critical eye being placed on a variety of streaming services that are making a lot of money on technology to share music.

Those companies are leaving the key players of that product (i.e. the musicians) out of the equation, or at least are not compensating them in any way commensurate with their contribution. What happens when musicians are not ultimately compensated for their efforts and creativity? You either get less music or you get lower quality music (a term that is hugely problematic and not something I will address; everyone has opinions on what good music is).

In many ways, software testers are feeling a bit like the musicians in the above example. Think about making music without musicians. Once that was considered unthinkable. Today, anyone with a laptop and some scripting skills *can* write a hit song (not that they necessarily will, but they absolutely can). What happens when we take musicians out of the equation of creating music? What do we lose? Likewise, what happens when we take testers out of the equation of making software? Some may argue that I'm stretching this analogy since programmers are closer to musicians than testers are but hey, my rules, I'm making them up as I go ;).



To torture this metaphor a little more, I want to make a plug for a person I think every tester should know. Truth be told, I do not know his real name. I only know him as "Blue".


Seriously, though, "Blue" is what I refer to as a "stand up philosopher" and in many ways, he's a great resource that I think any software tester will find value in both hearing and reading. Blue is young, like, I could be his Dad. Seriously. Still, Blue has a way of looking at the world and exploring how societies work that can be hugely helpful to a tester. He is also great at breaking down challenging topics and making them sound simple and I think this is the skill that every tester would benefit from. Seriously, check him out (end weird side tangent ;) ).

Testers need to be something other than just people who run tests. If that's all we bring to the table then we are ultimately expendable. We need to look at what we actually do for a company. We test, sure, but if that's all we do, we are of limited value. If, however, we are capable of articulating what the issues are and why they would be positive or negative for the organization, using our brains and our persuasion, then we have some leverage. To that end, I will say that testers need to be stand up philosophers in our own right (see, I had a reason for pulling Blue into this conversation ;) ). When the dialogue about testers being social scientists comes up, this is what is meant by it. When we talk about risk, we need to humanize it. We need to make it relatable. We need to feel as though the issues affect us and others because, ultimately, they do.

Ultimately those of us that want to play in the testing game (for whatever reason) are going to have to make a case for the humanity that we provide. If we cannot or do not make the case for it, then we are effectively saying we are totally expendable. Testers need to stop looking at the areas where they can be farmed out or mechanized and draw attention to the areas that they really do provide value. Our mechanical output is not really that special. If it can be repeated, it can be automated, at least at the mechanical and procedural level. What can't be automated? Thoughtful synthesis and informed advocacy for doing the right thing and why our course of action would be right.

To borrow from my own advocacy area, I can talk about the fact that code can be checked for compliance when it comes to accessibility. Can I do that? Absolutely. Can code do that? It sure can, and probably a lot faster than I can. Can a system make an informed decision that the experience a disabled person will have is comparable to the one a normative user will have? At this point in time, no machine can do that. You need people for that. People are good at advocacy. People are good at learning. People are good at making judgment calls. Testers would be well advised to place their efforts and emphasis on those humanities more so than the "Techne" of what they do.
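As a for-instance of the kind of compliance check a machine *can* do, here's a minimal sketch (Selenium's Python bindings, a placeholder URL) that flags images with no alt text. It can count the misses; it cannot tell you whether the experience is comparable:

```python
# A toy automated accessibility check: list <img> elements that lack alt text.
# The URL is a placeholder, and the judgment about the overall experience
# still needs a human.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")
    images = driver.find_elements(By.TAG_NAME, "img")
    missing = [img.get_attribute("src") for img in images if not img.get_attribute("alt")]
    print(f"{len(missing)} of {len(images)} images are missing alt text")
finally:
    driver.quit()
```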


QA/QE Supporting DevOps: an #STPCon Live Blog Entry

The QA/QE Role: Supporting DevOps the Smart Way

First off, Melissa Tondi is doing something I fully intend to steal. There are varying thoughts and approaches to having an introductory slide that introduces the speaker. Some don't use one at all. Some are required to do so at certain conferences. Melissa does something that I think is brilliant, funny and useful. Her first slide after the title simply starts with "Why Me?"

In short, Melissa is spelling out not who she is, or what her credentials are, but rather "you are here because you want to learn something. I want to give you the reasons why I think I'm the right person for that job here and now for you." Seriously, if you see me doing this at a future conference, props to Melissa and you saw it here first ;).



One of the avenues that Melissa encourages is the idea of re-tuning the methodologies that already exist. One aspect that I appreciate is Melissa's emphasis on not just QA (Quality Assurance) but also QE (Quality Engineering). They are often seen as being interchangeable, but the fact is they are not. They have distinctive roles and software testers frequently traverse both disciplines. The title is not as important as what is being done. Additionally, a key part of this is the ability to balance both technical acumen and user advocacy. In short, push yourself closer to Quality Engineering so that you can be an influence on the building of the software, even before the software gets built.

Introducing DevOps to an organization can be a wild ride, since so many people don't even know what DevOps is. Melissa is using Anne Hungate's definition: "The collapse and automation of the software delivery supply chain". For many, that starts and ends with building the code, testing the code, and deploying the code. The dream is a push button, where we press the button, everything is magic, and the software rolls out without any human interference. Sounds great and, believe me, the closer we get to that, the better. We'll set aside the fact that certain organizations won't be able to do that for practical business reasons, but having the capability in all of the key areas is still valuable.

There are some unique requirements in some countries and companies to have a title of "Engineer". That's a term that has a certain level of rigor associated with it and it's understandable that some would shy away from using an Engineering extension where it's not formally warranted. For this talk, let's set that aside and not consider QE as an official title but more as a mindset and a touch point for organizing principles. In short, you can be a QE in practice while still holding a QA title. Engineering presupposes that we are developing processes and implementing approaches to improve and refine work and systems.

One area that is definitely in the spotlight is test automation. A key point is that test automation does not make humans dispensable or expendable. It makes humans more efficient and able to focus on the important things. Automation helps remove busywork and that's a great place to apply it. That said, it's possible to automate stuff that nets little other than making screens flash and look pretty. Automating everything doesn't necessarily mean that we are automating important or intelligent items. Automation should get rid of the busy work so that testers can use their most important attribute (their brain) on the most important problems. Additionally, it's wise to get away from the "automate everything" mindset so that we are not building a monolithic monster whose sheer weight and mass make it unwieldy. By parallelizing or parameterizing tests, we can organize test scripts and test cases to be run when it's actually important to run them. In short, maybe it makes more sense to have "multiple runs" that come to a place of "multiple dones" rather than "run everything just because".
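As a small illustration of that "multiple runs, multiple dones" idea, here's a sketch using pytest markers. The marker names and the tests themselves are made up for the example; you'd register the markers in pytest.ini:

```python
# Tag tests so they run when it actually matters, not "everything every time".
# Marker names here are invented; register them in pytest.ini to avoid warnings.
import pytest

@pytest.mark.smoke
def test_login_page_loads():
    assert True  # placeholder for a fast check that runs on every commit

@pytest.mark.nightly
def test_full_checkout_flow():
    assert True  # placeholder for a slower end-to-end check

# Then pick the run that matches the "done" you care about, for example:
#   pytest -m smoke       on every commit
#   pytest -m nightly     on the nightly build
```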

Use automation to help define what is shippable. There shouldn't be an after-the-fact scramble to automate tests if they are actually important. By focusing on automation earlier in the process, you get some additional valuable add-ons, too. You limit the accrual of technical debt. You shake out issues with unscripted testing first. More to the point, you can address testability issues sooner (yes, I've mentioned this multiple times during this conference; I completed "30 Days of Testability" and now I have it clearly on the brain). Testability should be addressed early and it should be addressed often. The more testable your application, the more Automizeable the application will be (Oh Alan Richardson I have so fallen in love with that word ;) (LOL!) ).


What not to Test: an #STPCon Live Blog Entry


“Help, I’m Drowning in 2 Week Sprints, Please Tell Me What not to Test”

This talk title speaks to me at such a profound level. Be warned, I may veer into tangent territory here. There are what I call "swells" that come with sprint madness and sprint fatigue. It's never constant; it's like a set of waves that you have to time. For those familiar with surfing, this likely makes sense. For those not familiar: waves tend to group together, and swells grow and shrink. These series of increasing and decreasing waves are referred to as "sets", and the goal is to time the set that feels good to you. Too early and you don't catch a wave. Too late and the wave wipes you out. In between are the rideable waves to drop in on.

Sprints are the software metaphor that goes with "timing the waves", but the problem is that the timing of sprints is constant while wave sets are not. Likewise, even figuring out what matters for a sprint may take more or less time in any given sprint. Tasks like backlog grooming, story workshops, sprint planning, etc. all come down to making sure that we have an understanding of what matters and what's actually available to us.

Risk-based testing is the idea that we focus our attention on the areas that present the most potential danger and we work to mitigate that. We all know (or should know) that we can't get to everything. Thus we need to focus on the areas that really matter.

Mary recommends that we place emphasis on testing ideas. Testing ideas should go beyond the acceptance criteria. We can easily be swayed to think that focusing on the acceptance criteria is the best use of our time, but often we discover that with a little additional looking we can find a variety of problems that simply looking at acceptance criteria won't cover. We also need to be aware that we can range far afield, perhaps too far afield, if we are not mindful. Test ideas are helpful, but don't just play "what if" without asking the most basic question: "which would be the riskiest area if we didn't address it?"

An area where this comes up for me (tangent time) is that I will be testing something and find that we have to deal with an issue related to our product that has nothing to do with the stories in play. I am the owner of our CI/CD pipeline (note: that doesn't mean I'm the expert, just that I own it and I am the one responsible for it working properly). If something happens to our CI/CD pipeline, who do you think is the first person to spring into firefight mode? Are you guessing me? Congratulations! In a sprint, I don't have the luxury of saying "oh, sorry, I can't deal with pipeline issues, I have to finish testing these stories". Therefore, any time I have an issue such as a pipeline problem that needs to be addressed, I immediately put a spike into the sprint. I do my best to estimate how much time it will take and whether I can handle it myself (often the case) or need to pull in development or ops resources (also often the case). What happens over time is that we get a clearer picture of not just actual testing focus but also the legitimate interruptions that are real and necessary to deal with. In a sprint, there is a finite amount of time and attention any of us can spend. Time and attention spent on one area necessarily isn't spent elsewhere, and no, saying you'll stay up later to cover it is robbing your future self of effectiveness. If you are doing that, STOP IT!!!

Performing a test gap analysis is also helpful. In a perfect world, we have test cases, they've been defined, and we have enough information to create automated tests around them as the functionality is coming together. Reality often proves to scuttle that ideal condition, or at least it means that we come up short a bunch. What we often discover is a range of technical debt. Areas may be well covered and easily documented with test cases and automated tests. Other areas may prove to be stubborn to this goal (it may be as simple as "this is an area where we need to spend some time to determine overall testability").

The Pareto Principle is a rule of thumb, it's not absolute. Still, the old adage that twenty percent of something is going to give you eighty percent of outcomes is remarkably resilient. That's why it's a rule of thumb in the first place.

Twenty percent of test ideas can help you find eighty percent of the issues.
Twenty percent of the application features will be used by eighty percent of the customers.

What does this mean? It means you need to get a read on what's actually being used. Analytics and an understanding of them are essential. More important, using analytics on your test systems is important, not just the prod numbers. One thing that was driven home to me some time back was the fact that analytics need to be examined and the configurations need to be experimented with. Otherwise, yes, you can have analytics in place but do you actually know if you have them turned on in the right places? How would you know?

One more interesting avenue to consider is that you cannot test everything, but you can come up with some interesting combinations. This is where the idea of all-pairs or pairwise testing comes into play. Testers may be familiar with the all-pairs terminology. It's basically an orthogonal array approach where you take a full matrix and, from that matrix, look at the unique pairs that can be created (some feature paired with some platform, as an example). By looking for unique pairs, you can trim down a lot of the tests necessary. It's not perfect, so don't use it blindly. Some tests will require that they be run on every supported platform, and not doing so would be irresponsible. Still, prudent use of pairwise testing can be a huge help.
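To show the reduction in concrete terms, here's a naive sketch in Python. The dimensions are my own toy example and the greedy cover is not a proper orthogonal array, but it makes the point that far fewer rows can still hit every unique pair:

```python
# A naive greedy pairwise cover: keep a row from the full matrix only if it
# covers at least one (dimension, value) pair we haven't seen yet.
from itertools import combinations, product

params = {
    "browser": ["chrome", "firefox", "safari"],
    "os": ["windows", "macos", "linux"],
    "role": ["admin", "member", "guest"],
}

names = list(params)
# Every unique (dimension, value) pairing we want to see at least once.
required = {
    frozenset([(a, va), (b, vb)])
    for a, b in combinations(names, 2)
    for va in params[a]
    for vb in params[b]
}

suite, uncovered = [], set(required)
for combo in product(*params.values()):  # the full matrix is 27 rows
    row = dict(zip(names, combo))
    pairs = {frozenset([(a, row[a]), (b, row[b])]) for a, b in combinations(names, 2)}
    if pairs & uncovered:  # greedily keep rows that cover something new
        suite.append(row)
        uncovered -= pairs
    if not uncovered:
        break

print(f"{len(suite)} of 27 possible configurations cover every unique pair")
for row in suite:
    print(row)
```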




One Thing Testers Can Learn from Machine Learning: an #STPCon Live Blog Entry


Today's keynote is coming to us courtesy of Mary Thorn. Mary's been in the game about twenty years including a stint as a COBOL programmer relative to the Y2K changes that were happening at the time. Mary is also going to be doing a second talk a little later, so she's going to be in this blog twice today.

All right, let's get a show of hands. How many people have seen a marked increase in hearing the term Machine Learning? How many people feel that they understand what that is? It's OK if you are not completely sure. I feel less sure each time I see a talk on this topic. Let's start with a definition. Arthur Samuel defined it as: “Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed.”  The key here is that the machine can learn and can then execute actions based on that learning. This includes a couple of terms, supervised and unsupervised learning. Supervised learning is the process of learning something that maps an input to an output based on example input-output pairs. Each example is a pair consisting of an input and the desired output. Unsupervised learning groups unlabeled and unclassified data and by using cluster analysis identifies commonalities in the data and reacts based on the presence/absence of commonalities.

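For a concrete feel of the difference between the two, here's a tiny sketch (scikit-learn, made-up toy points, nothing from Mary's slides): the supervised model needs the labels, the unsupervised one finds the groups on its own.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy data: two obvious groups of points (purely illustrative).
X = np.array([[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])  # labels exist only in the supervised case

# Supervised: learn the input -> output mapping from labeled examples.
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict([[2, 2], [9, 9]]))

# Unsupervised: no labels; the algorithm groups points by similarity.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print("unsupervised clusters:", km.labels_)
```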


OK, that's fascinating, but what does this actually mean? What it means is that our systems are not just the dumb input/output systems we tend to treat them as. There's a change in the skills we need to have and in the ways that we focus our testing efforts. More to the point, there's a variety of newer skills that testers need to develop. It may not be necessary to learn all of them, but testers (or at least test teams) would be well suited to develop levels of understanding around automation, performance, DevOps, exploratory testing, and pipeline CI/CD skills. Those skills may or may not reside in the same team member, but they definitely need to reside in the team.

There are a couple of things that are helpful to make sure that this process leads to the best results. The first is having a specific goal or set of goals to point towards. In the process, we need to look at the outputs of our processes, examine what the data tells us, and follow it where it leads. To be sure, we may learn things we don't really want to know. Do we have defined tests for key areas? How are they being used? Do they matter in the first place? What are the interesting things that jump out at us? How can we help to determine if there is a cluster of issues? This is where exploratory testing can be a big help. Automation can help us consolidate the busywork and gather things together. From there, we as individual testers can look at the data output and look for patterns. Additionally, and this is apropos of the 30 Days of Testability focus I jumped through in March, we can use the data we have received and analyzed to help us determine the testability of an application. Once we determine areas where testability might be lacking, we should do what we can to emphasize and increase the overall testability of our applications.

Analytics are hugely helpful here. By examining the analytics we can determine what platforms are used, what features actually matter, and what interactions are the most important. In short, let's make sure we learn what our app is actually doing, not just what we think or want it to do.




Getting underway at #STPCon Spring 2019


Welcome everyone.

We are just a little bit away from getting the conference started. The workshop days are over and with them my "quieter than normal" streak. Today starts the multiple blog entries that you either love or dread ;).

A couple of reminders. First, when I live blog it’s stream of consciousness. I may or may not have clean posts. Grammar may be dodgy. I may spin off into tangents. Some of my thoughts may come off half baked. I assure you this is normal :). 

I may go back and clean up entries for gross grammatical issues but other than that what you see is what you get. My impressions, mostly unfiltered. Hope you enjoy the series :).

Tuesday, April 2, 2019

The Second Arrow: an #STPCon Live-ish Blog Entry

Yesterday was the start of the workshops day at STP Con and I was happy to present for the second time "How to Build a Testing Framework From Scratch". It's done. I've had a chance to sleep on it after being emotionally spent from giving it. Now I can chat a bit about the experience and some lessons learned.

First, I was able to deliver the entire presentation in three hours, which blows my mind.

Second, I think the time spent talking about the reasoning behind why we might do certain things is every bit as important as the actual technical details.

Third, I've come to realize that there is an odd excitement/dread mix when presenting. Many people say that they are most nervous the first time they present a talk or presentation. I've decided I'm more nervous the second time I present something. The first time I may get through on beginner's luck or if I do take an arrow in the process (meaning I realize areas I messed up or could do better) that's in the moment and it's experienced, processed and put away for future reflection.

I use the term "arrow" specifically due to an old podcast where Merlin Mann related this idea. Someone in battle feels the first arrow that hits them. It hurts, but it doesn't hurt nearly as much as the second arrow. The reason? The first arrow hits us by surprise. The second arrow we know is coming. It's the same impact, but because I've been there and done that, I am often frustrated when the issues I dealt with the first time aren't mitigated, or when something else I hadn't considered happens.

Much of this came down to making materials available to people in a way that was useful and timely. As I talked to a number of participants, we realized we had several similar problems:

- the materials were made available in advance but some people waited until the night before at the hotel to download them and discovered the hotel bandwidth couldn't handle it.

- the flash drive I handed off (though I did my best to make sure it was read/write on as many machines as possible) ended up as read-only on some machines. Thus it meant copying everything over to bring up the environment, which took close to a half hour for many people.

- even with all of this, I still managed to have to hear (more times than I wanted to), "sorry, my Hyper-V manager is set up by my company. I can't mount the flash drive or open the files". Ugh! On the "bright side" that was a situation that I couldn't control for or do anything about even if everything else worked flawlessly. Still, it was frustrating to have to tell so many people to buddy up with someone who could install everything.

So what did I learn taking my second arrow with this presentation?

1. The immediate install party will only ever happen if everyone in advance confirms that they are up and running well before the event. While the flash drives certainly help, they don't provide that large a time savings as compared to just having everyone set up when they walk in.

2. The "set up" and "rationale" part of my talk... since it's a workshop, what I should be doing (I think), is getting into the nuts and bolts immediately, and sharing rationale around each part of the process as we are getting into it. As it was, my introductory material took about 40 minutes to get through before we fired up the IDE and explored the framework itself. That's too long. Granted, it's there so that people can get everything installed but I think I can pace it better going forward.

3. Though the framework I offer is bare bones, I think I can comment better in the examples and should have some before and after examples that use different aspects and let people see them as a natural progression. Perhaps have three maven projects, each being a further progression from the last one.

Don't get me wrong, I had a blast giving this workshop and I hope the participants likewise enjoyed it. Still, I hope I can make it better going forward and here's hoping I'll get another chance to present it at another conference and hopefully not end up taking the third arrow ;).

Thursday, March 28, 2019

Inclusive Meta Paradox Frameworks: A Little Shameless Self Promotion

I realize I'm terrible at promoting myself and the things that I'm doing. Having said that, I do want to encourage everyone to see what I'm up to and with that, I'm sharing a podcast I recorded with Mark Tomlinson for STPRadio.

Listen to "STPCON Spring 2019 Michael Larsen on Inclusive Meta Paradox Frameworks" on Spreaker.

You know the old saying "When the going gets weird, the weird turn pro?" Well, if you don't you do now :). Seriously, I love this title. Thank you, Mark, this is great.

Also, for those of you who are intimately familiar with my editing style on "The Testing Show", you may think that I am always smooth and flawless in my delivery, without any wasted breaths. Yep, it's true, no one on The Testing Show breathes... kidding, but now that I've planted that little seed in your head, I'll bet the next time you listen to an episode you'll be subconsciously dwelling on that ;). My point is, Mark keeps it real, and whatever was said as it was said is there in real time, so if you are curious as to how I really sound when I'm interviewed, here's your chance.

In this podcast, I talk about my workshop around "building a framework from scratch" (and yes after I finish this presentation I am going to start unpacking it and posting it here) as well as my talk on Accessibility and Inclusive Design and how they can be used to help Future Proof software.

If you will be at STPCon and you will be in my presentations, here's a taste of what to expect. If not, well, you get that anyway just by listening. Have fun and if you like the podcast, tell a friend about it, please.

Thursday, April 12, 2018

Talking About Talking - a 1 1/2 armed #LiveBlog from #STPCON Spring 2018

Any time I attend a conference, I tend to go with 70% new content and about 30% familiar speakers. Over time, I've found it harder to look for new people because many of the people I get to know get asked back to present at conferences. With that out of the way, I consider Damian Synadinos a friend, but I picked his discussion for a specific reason. While I think he intends for this to be about public speaking, I'm looking to see how I can apply a public speaking focus to my own day-to-day company interactions.

Why do I speak at conferences, at meetups, or at events? There are a variety of reasons but if I have to be 100% honest, there are two reasons. The first is wholly professional. I want to develop credibility. I want to show that I walk the talk as well as that I know at least an aspect of something. The second is personal and depending on how well you know me, this is either a revelation or so obvious it's ridiculous. I'm an ex-"RockStar". I used to spend a lot of time entertaining people as a singer and I loved doing it. I get a similar rush from public speaking, especially when people say they actually enjoy what I'm talking about and how I deliver the messages I prepare.

Part of talking in public is the fact that you are putting your ideas out there for others to consider. That can be scary. We own our ideas and our insecurities. As long as we keep them to ourselves, we can't be ridiculed for them. In short, people can't laugh at us for the things we don't put out there. Personally, I think this is a false premise. I have had people laugh at what I presented, but not because what I was saying was foolish or made me look dumb. Instead, it was because what I was talking about was in and of itself funny (actually, more absurd) and people laughed because they could relate. To date, I have not been laughed at. Laughed with, lots of times. If you are afraid people will laugh *at you*, let me reassure you, it's hugely unlikely that will happen.

The biggest reason why I encourage getting out there and speaking is that our ideas deserve to be challenged and we should want to challenge our ideas. Also, we may never aspire to get on a stage and speak, but all of us participate in meetings or presentations at work, in some form or another. By getting out there and speaking, we can improve our ability to function in these meetings. 

Something else to consider for a reason to give a talk or speak in public is what I call the "ignorance cure" for a topic. It's wise to talk about stuff we know about, but once a year or so, I will deliberately pick something I don't know much about or that I could definitely know more about. When I do this, I try to pick a timeline that gives me several months so that I can learn about it in a deeper way. People's mileage may vary with this, but I see a definite benefit from doing this.

Not every talk idea is going to be amazing. Not every talk idea is going to be some revolutionary idea. Truth be told, I'm lousy at revolutionary things. I'm highly unlikely to be creating the next big anything. However, I am really good at being a second banana, taking an idea someone else has and running with it. Don't be afraid that something you want to talk about isn't new. We aren't born with ideas, and most of the time, we stand on the shoulders of giants.

My recommendation to anyone who has any interest in public speaking, no matter how small, is to borrow from Morrissey... "Sing Your Life". Talk about your experiences, as they will always be true and real. You may not be an expert on a topic, but you are absolutely an expert on *your experiences* with that topic. Also, if anyone wants to get up and talk, let me know. I'd be happy to help :).

Release is a Risky Business - a 1 1/2 armed #LiveBlog from #STPCON Spring 2018

Good morning. Let me get this out of the way now: I'm going to be splitting my mental energy between attending STP-CON and the fact that a new candidate release dropped last night and I need to see how close to shippable it is. Splitting my brain is an everyday occurrence, but it may mean I miss a session or two. I'm not missing this first one, though ;).

Simon Stewart is probably well known by those who read my blog regularly. WebDriver guy and a bunch more. We're talking about the changes and the way that "release" has morphed into faster releases, along with a greater push to automation. A lot of the stuff that fell by the wayside in that change is, honestly, stuff that I don't miss. There's a lot of busywork that I am glad has been taken over by a CI/CD pipeline.

Outside of software testing, my professional life has become very closely knit into release. Jenkins is my baby. It's an adopted child, maybe a foster child, but it's still my baby. As such, I tend to spend a lot of time fretting over my "problem child", but when it works it is quite nice. Remind me when I am over the PTSD of the past month just how sideways CI/CD can go, but needless to say, when a tester takes over release management, they go from almost invisible to ever present.

Release is risky, I can appreciate that greatly, especially when we get closer to a proper release. Though I work in an environment where by necessity we release roughly quarterly, in our development and staging environment, we aim to be much "Agiler" and closer to an actual continuous environment (And yes, I checked "Agiler" is a real word ;) ).

Simon points out, and quite rightly, that release is really less about quality and more about risk mitigation. For that matter, testing is less about quality than it is risk mitigation. For that matter, staging environments do not really give you any level of security. Simon makes the point that staging environments are a convenient fiction but they are a fiction. My experiences confirm this. About the only thing a staging environment tells you is if your feature changes play well with others. Beyond that, most staging environments bear little to no resemblance to a proper production environment. If you think Simon is encouraging releasing and testing in production, you would be correct. Before you have your heart attack, it's not the idea of a massive release and a big push of a lot of stuff into production and all bets are off. If you are going to be doing frequent releases and testing in production, you have to think small, get super granular and minimize the odds of a push being catastrophic. Observability and monitoring help make that possible.
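One way to "think small" along those lines is a percentage-based rollout, something like this sketch (entirely illustrative on my part, not anything Simon showed): a stable hash assigns each user to a bucket, so a bad push only reaches a sliver of traffic while monitoring does its job.

```python
# A toy sketch of gradual rollout: gate a new code path behind a percentage
# flag. Names and numbers are invented for illustration.
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable 0-99 bucket per user and feature
    return bucket < percent

# Roll "new-checkout" out to roughly 5% of users; widen it as monitoring stays quiet.
print(in_rollout("user-42", "new-checkout", percent=5))
```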

There's a lot that can go wrong with a release and there's a lot that can go right with it, too. By accepting the risk and doing all you can to mitigate those risks, you can make it a little less scary.


Wednesday, April 11, 2018

The Use and Abuse of Selenium - a 1 1/2 armed #LiveBlog from #STPCON Spring 2018

I realized that the last time I heard Simon speak was at Selenium Conf in San Francisco in 2011. I've followed him on Twitter since then, so I feel I'm pretty well versed in what he's been up to, but the title intrigued me so much, I knew I had to be here.

Selenium has come a long way since I first set my hands on it back in 2007. During that time, I've become somewhat familiar with a few implementations and with bringing it up in a variety of environments. I've reviewed several books on the tools and I've often wondered why I do what I do and if what I do with it makes any sense whatsoever.

Simon is explaining how a lot of environments are set up:

test <-> selenium server <-> grid <-> driver executable <-> browser 

The model itself is reasonable but scaling it can be fraught with disappointment. More often than not, though, how we do it is the reason it's fraught with disappointment. A few interesting tangents spawned here, but basically, I heard "Zalenium is a neat fork that works well with Docker" and I now know what I will be researching tonight after the Expo Reception when I get back to my evening accommodations.
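For reference, pointing a test at a grid instead of a local driver executable looks roughly like this with the Python bindings (Selenium 4 style; the hub URL is a placeholder):

```python
# A minimal sketch of running against a remote grid/hub rather than a local driver.
from selenium import webdriver

options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",  # hypothetical grid endpoint
    options=options,
)
try:
    driver.get("https://example.com")
    print(driver.title)
finally:
    driver.quit()
```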

Don't put your entire testing strategy in Selenium! Hmmm... I don't think we're quite that guilty, but I'll dare say we are close. Test the happy path. Test your application's actual implementation of its core workflows.

Avoid "Nero" testing. What's Nero testing? It's running EVERYTHING, ALL THE TIME. ALL THE TESTS ON ALL THE BROWSERS IN ALL THE CONFIGURATIONS! Simon says "stop it!" Yeah, I had to say that. Sorry, not sorry ;).

Beware of grotty data setup: First of all, I haven't heard that word since George Harrison in "A Hard Day's Night" so I love this comment already, but basically it comes down to being verbose about your variables, having data that is relevant to your test, and keeping things generally clean. Need an admin user? Great, put it in your data store. DO NOT automate the UI to create an Admin user!
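In other words, something like this sketch, where the admin user comes from a hypothetical fixture API rather than a UI walkthrough (the endpoint, payload, and token are all invented for illustration):

```python
# Seed test data out-of-band: create the admin user through a fixture API,
# then let the UI test simply log in as that user.
import requests

def seed_admin_user(base_url: str, token: str) -> dict:
    resp = requests.post(
        f"{base_url}/api/test-fixtures/users",  # hypothetical fixture endpoint
        json={"username": "admin_fixture", "role": "admin"},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # the UI test can now log in as this user directly
```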

Part of me is laughing because it's funny but part of me is laughing because I recognize so many things Simon is talking about and how easy it is to fall into these traps. I'm a little ashamed, to be honest, but I'm also comforted in realizing I'm not alone ;).

Modern Testing Teams and Strategies - a 1 1/2 armed #LiveBlog from #STPCON Spring 2018

One of the fun parts of being a "recidivist conferencist" is that we get to develop friendships and familiarity with speakers we see over several years. Mark Tomlinson and I share a ridiculous amount of history in both the testing world and personal endeavors, so I always enjoy seeing what he is up to and what he will talk about at any given event. This go-around, it's "Testing Teams and Strategies", so here we go...

Does it seem common that the people who decide what you do and how you do it have NO IDEA what it is you actively do? I'm fortunate that that is not so much the issue today, but I have definitely lived that reality in the past. It's annoying, to be sure, but often it comes down to the fact that we allow ourselves to be pigeonholed. The rate of change is insane and too often we think that we are being thrown into the deep end without a say in what happens to us. If we don't take some initiative, that will continue to happen to us.

I've had the opportunity over the past (almost) three decades to work in small teams, big teams, distributed teams, solo, and freelance. Still, in most of my experiences, I've been part of what I call "the other" organization. That's because I've worked almost exclusively as a tester in those three decades (my combined time as a cable monkey, administrator, and support engineer adds up to less than four total years, and even in those capacities I did a lot of testing). Point being, I've spent most of my time as part of an "other" organization that has been siloed. It's a relatively new development that I'm working on a team that's both small enough and focused enough that I'm actually embedded in the development team. As a point of comparison, my entire development team is seven people: three programmers, three testers, and one manager. Really, that's our entire engineering team. That means there is too much work and not enough people for anyone to be siloed. We all have to work together, and in reality, we do. My role as a tester has changed dramatically, and the things I do that fall outside the traditional testing role are growing every day.

If I had to put a name on our type of team, I'd probably have to describe us as a blended group of "Ronin", meaning we are a relatively fluid lot with a host of experiences and we are ultimately "masterless". If something needs a champion, it's not uncommon for any of us to just step up and do what's needed. The funny part is Mark just put up the "non-team testing team" and basically defined exactly what I just wrote. Ha!!!

OK, so teams can be fluid, that's cool.  So how do we execute? Meaning, what is the strategy? To be clear, a strategy means we deliver a long-term return on investment, align resources to deliver,  arrange type and timing of tactics and make sure that we can be consistent with our efforts. Ultimately, we need a clear objective as well as a strategy to accomplish the objective. Sounds simple but actually being concrete with objectives and developing a clear method of accomplishing them is often anything but. In my opinion, to be able to execute to a strategy, we have to know what we can accomplish and what we need to improve on or develop a skill for. Therefore a skills analysis is critical as a first step. From there, we need to see how those skills come into play with our everyday activities and apply them to make sure that we can execute our strategy with what we have and develop what we need to so that we can execute in the future.



More than That - a 1 1/2 armed #LiveBlog from #STPCON Spring 2018

I had to step out and take a meeting so I missed probably half of Damian Synadinos' talk. Therefore, if this feels incomplete and rambling, well, that's because it literally is ;).

I am intimately familiar with being asked "what I do" as well as "who I am". The fact is, I am a lot of people. No, I don't mean in a schizophrenic sense (though that's debatable at times). I mean it in the Walt Whitman sense:

"Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes."

The point is that we are never just one thing. All of our quirks, imperfections, and contradictions come from our many experiences, histories and active pursuits.

Depending on who you talk to about me, you might get a wildly interesting view of exactly who I am. It might get really interesting depending on what period of my life you ask about, but if I had to guess, these identities might show up:

actor
bass player
bodybuilder
boy scout leader
carpenter
cosplayer
dancer
drummer
father
fish geek
gardener
guitar player
husband
mandolinist
mormon
obsessive music fan
otaku
photographer
pirate
podcast producer
poet
programmer
prose writer
singer
snowboarder
tester
video gamer
yogi

If I had to choose a specific attribute, I'm going to lay claim to "eclectic".

Each of these has informed my life in a variety of ways, and each of them has given me skills, interests, and a number of very interesting people to do this variety of things with. In many ways, it's the people that I interacted with who informed how much or how little time any of these endeavors/attributes have been a part of my life, but all of them are part of my life and all of them have provided me skills to do the things I do.

Also, if any of the items on that list have you wondering what they are, how I'm actively involved in them, or why I chose to mention them, please ask. Be my guest :).

Performance Test Analysis & Reporting - a 1 1/2 armed #LiveBlog from #STPCON Spring 2018

One of the factors of performance testing that I find challenging is going through and actually making sense of the performance issues that we face. It's one thing to run the tests. It's another to get the results and aggregate them. It's still another to coherently discuss what we are actually looking at and how they are relevant.

Mais Tawfik Ashkar makes the case that Performance Analysis is successful when people actually:

  • read the results
  • understand the findings
  • can be engaged and most important 
  • understand the context in which these results are important


Also, what can we do with this information? What's next?

Things we need to consider when we are testing and reporting, to be more effective would be:

  • What is the objective? Why does this performance test matter?
  • What determines our Pass/Fail criteria? Are we clear on what it is?
  • Who is on the team I'm interacting with? Developers? BA? Management? All of the Above?
  • What level of reporting is needed? Does the reporting need to be different for a different audience (generic answer: yes ;) )

What happens if we don't consider these? Any or all of the following:


  • Reports being disregarded/mistrusted
  • Misrepresentation of findings
  • Wrong assumptions
  • Confusion/Frustration of Stakeholders
  • Raising more questions than providing answers

Mais starts with an Analysis Methodology.  Are my metrics meaningful? Tests pass or fail. Great. Why? Is the application functioning properly when under load/stress? How do I determine what "properly" actually means? What are the agreements we have with our customers? What are their expectations? Do we actually understand them, or do we just think we do?
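One small example of making a metric meaningful is turning an agreed expectation into an explicit pass/fail instead of a wall of numbers. A minimal sketch follows; the 2.0 second p95 target and the sample timings are made-up stand-ins for whatever the customer agreement actually says:

```python
# Turn raw response times into an explicit pass/fail against an agreed target.
import statistics

def p95_passes(response_times, threshold_s=2.0):
    p95 = statistics.quantiles(response_times, n=20)[18]  # 95th percentile cut point
    print(f"p95 = {p95:.2f}s against a {threshold_s:.1f}s target")
    return p95 <= threshold_s

samples = [0.8, 1.1, 0.9, 1.4, 2.3, 1.0, 1.2, 0.7, 1.9, 1.1]  # illustrative timings
print("PASS" if p95_passes(samples) else "FAIL")
```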

By providing answers to each of these questions, we can ensure that our focus is in the right place and that we are able to confirm the "red flags" that we are seeing actually are red flags in the appropriate context.


Tester or Data Scientist - a 1 1/2 armed #LiveBlog from #STPCON

Smita Mishra is covering the topic of  "Tester and Data Scientist". Software Testing and Data Science actually have a fair amount of overlap. Yes, there is a level of testing in big data but that's not the same thing.

A data scientist, at the simplest level, is someone who looks through and tries to interpret information gathered to help someone make decisions.

Website data can tell us what features people engage with, what articles they enjoy reading and by extension, might help us make decisions as to what to do next based on that information.

An example can be seen on Amazon, where about 40% of purchases are made based on user recommendations. The Data Scientist would be involved with helping determine that statistic as well as its validity.

Taking into consideration the broad array of places that data comes from is important. Large parallel systems, databases of databases, distributed cloud system implementations, aggregation tools, all of these will help us collect the data. The next step, of course, is to try to get this information into a format to be analyzed and for us (as Data Scientist wannabes) to synthesize that data into a narrative that is meaningful. I find the latter to be the much more interesting area and for me, that's the area that I'm most interested in learning more about. Of course, there needs to be a way to gather information and pull it down in a reliable and repeatable manner. The tools and the tech are a good way to get to the "what" of data aggregation. Interacting with the "why" is the more interesting (to me) but more nebulous aspect.
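As a toy example of the "what" feeding the "why", here's the kind of aggregation step a data-science-curious tester might start with (pandas, and completely made-up events):

```python
# Group raw page events and surface which features actually get used
# and how long they take; the narrative-building comes after this step.
import pandas as pd

events = pd.DataFrame(
    {
        "feature": ["search", "search", "checkout", "recommendations", "search"],
        "duration_ms": [120, 95, 430, 210, 101],
    }
)

summary = events.groupby("feature").agg(
    uses=("feature", "size"),
    avg_duration_ms=("duration_ms", "mean"),
)
print(summary.sort_values("uses", ascending=False))
```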

So what do I need to know to be a Data Scientist?

  • Scientific Method is super helpful.
  • Math. Definitely, know Math.
  • Python and R both have large libraries specific to data science.
  • A real understanding of statistics.
  • Machine Learning and the techniques used in the process. Get ready for some Buzzword Bingo. Understanding the broad areas is most important to get started.

Recommended site: Information is Beautiful

The key takeaway is that, if you are a tester, you already have many of the core skills to be a Data Scientist. Stay Curious :).