Wednesday, November 6, 2013

Hej från Sverige: Somewhat Live, and sort of awake, at Øredev 2013

Hello everyone, and welcome to another stream of consciousness from yours truly. If I seem a little more stream-of-consciousness than usual, please understand: it's 11:30 p.m. on my biological clock, and I'm trying to overcome a nine-hour time differential. I wasn't feeling it at all yesterday... I'm really feeling it now :).

To start this off, I want to say thanks very much to Maria Kedemo, since she is, first and foremost, the reason I am here in Malmö, Sweden. Maria is the one who extended the invitation for me to speak, so my giddy, bleary, and somewhat sleep-deprived state of mind is because of her, and I am genuinely grateful :).


Denise Jacobs starts us off with a talk about the Creative Revolution, which, for a conference dedicated to the arts, seems to make perfect sense :). She starts by describing a tiny coral polyp and declares that this little creature should well be called "Darwin's Wing Man," since had it not been for this polyp (and, we should add, quite a few of its polyp siblings), there would not be a Galápagos Islands for Darwin to discover and contemplate On the Origin of Species. Denise considers herself a "Creative Evangelist", and the interesting thing is that creativity is highly valued among the attributes we consider important in others; more than that, it is a huge element in transformation. For something we consider so valuable... how can we use it more effectively? The first thing is that we need to focus on creativity not just for ourselves, but in groups as well. "Creativity is all about making connections and seeing patterns."

Denise makes the point that she is talking about a (R)evolution, which should not be confused with "Revolution". With this, we come back to our friend the coral polyp. That polyp is important, but it cannot do it all by itself. It needs others like itself, each with its own genetic structure, to come together to make a coral reef. Sorry I can't give a visual of this, but an entire Swedish audience just did the wave (I kid you not ;) ). Denise used this visual to show that everyone together made this really cool thing happen.


One of the problems that hampers most of us is that we tend to be overly aggressive and optimistic with what we want to do. We want to do *BIG* things, but very often, it's a collection of tiny things that gets us to the big things. We are even more likely to do something if we frame it positively, and more effective still if we make it present (as in present tense). Anchor to a current habit if you want to make a change stick. More to the point, congratulate yourself when you do it; your dopamine receptors will thank you for it ;).


One of the killers of our creativity is that we get talked out of who we are and what we do. We hide our brilliance and our spark because we don't like being nails that stick up (we all know what happens to the nail that sticks up, right? It gets pounded down). When we over-emphasize working on our weaknesses, we tend to make very minor improvements in those areas (and we tend to resent the improvement). If we focus on our strengths, and really leverage them, we can kick-start amazing transformations. If you really want to see this in action, hang around some young kids, as they are perfect examples of this. Kids don't stress about what they are not good at; they jump into the things they genuinely love doing. We should do likewise.

We were asked to talk with our chair mates about a new topic we could turn into a talk. I enjoyed talking with Bjorn Granvik and his ideas about "Joy in Diagrams", which sounds kind of brilliant, to be totally honest. Me? I shared Ministry of Testing stickers with my chair mates and told them about the value and beauty of diverse geographic communities and how we are raising the visibility of software testing. Here's hoping we get a chance to present on those items :).

Creativity is supralinear; the more we interact, the greater the possibility of creativity. We need to escape our silos whenever possible, so that we can break out of "sticking with what we know". Creativity is kick-started when we have a broader group and more people to interact with. Getting people together is easy; doing something with them once they're together is the harder part, because we tend to tear each other down. Instead, let's try to see if we can "plus" the ideas and opportunities that come our way. There's no guarantee that we will do something awesome with everything, but let's start where we are, and see if we can "yes, and" rather than "yeah, but". Oh, and for Denise... the plural of "ethos" is "ethe" or "ethea" ;).

---

Next up is Woody Zuill and the idea behind "No Estimates". This is tethered to the world of Agile, so Woody said that, if anyone was working in a traditional development environment, there might not be a lot of hope. I know from my own experience that most estimates range anywhere from ineffective to completely unreasonable. The problem is that we are looking for certainty in places where certainty really doesn't exist.


Time, cost, potential revenue: these areas certainly make sense for us to consider while we look at creating new products and services, but as Woody said, if he were actually good at doing estimates, he wouldn't be making software, he'd be getting rich betting on sporting events (heh, interesting point ;) ). Estimates are educated (or not) guesses about work we do not understand. It seems that understanding is needed before the estimate, right? How many of the estimates we make turn out to be completely wrong? Do we make bad decisions based on what we think we want, or do we start to get defensive as soon as we make our decisions?


Woody is suggesting we do something unnatural. How about, instead of making estimates, making something that we can quickly put into use and evaluate? How about if we prove value in small chunks, rather than taking large pieces of functionality and finding ourselves endlessly surprised when what we end up implementing doesn't hit the mark? Estimates are used to attempt to predict the future. Add to that "the only sure thing about forecasts is that they are (often) wrong." Unknown multiplied by unknown is... I think you get the point ;). Wouldn't it be great to call estimates what they really are, which is "WAGs" (Wild Assed Guesses)? If we were more vocal about calling them that, we might get some managerial traction on understanding the futility of estimates.


Agile is about discovery and steering. Napoleon Bonaparte was quoted as saying "One jumps into the fray, then figures out what to do next". It worked for him most of the time, with the exception of that whole Russian winter thing (or Waterloo), but otherwise, he really did have a heck of a run!


"But my customers demand estimates" Really? Do they demand estimates because they want/need them, or because they've been burned too often to believe that we can actually deliver? What if we were to give them a direction, high level, without trying to fake and add minutiae that isn't real world?


If we want to work without estimates, we need to do real work. Focus on delivering quickly, in small batches, and get a gauge of what the product is actually doing. Sounds a lot like Agile, doesn't it ;)?


---
Anne-Marie Charrett is talking about "Curiosity Killed the Cat, but What Kills Curiosity?", which draws on a less-than-stellar consulting gig she took part in. Testing is questioning, and when we stop questioning, or when we stop being engaged and providing valid and valuable information, we are in trouble.


Anne-Marie is proud of her role as a coach, and in her view, one of the primary roles of a coach is to be dispensable. She doesn't want to have to be there forever. She wants to help the teams work better, and then move on to work with others. This particular company had three teams, a mix of disciplines, and a handful of dedicated testers for each project. The key focus was to bolster the testers and improve their capabilities.

After talking to the testers, some pictures emerged. The testers felt they were isolated, they had no one to help them, and they had excessive points to test, i.e. the hidden work that appears. They were able to make some wins: testers worked together to solve problems, they worked on cross-knowledge, they added coaching to help facilitate exploratory testing, and they revised their automated testing strategy. There were also some wins with regard to quality (what "done" means, agreement on quality, checking vs. testing, etc.).

This doesn't mean everything went well... there was little in the way of stand-ups or communication, cliques abounded, bugs were not getting fixed, and stories were either under-defined or ill-defined. Add to that, the company owner was so into "Lean Startup" that it seemed their trajectory was being changed all the time. Testers were afraid to speak up, afraid to question stories, and afraid to question developers' decisions. Anne-Marie said there was a point where she realized she had made a number of assumptions that, looked at in hindsight, stood in the way of her being successful. Anne-Marie saw that the root problem was a loss of curiosity. What kills curiosity?

Being Intimidated
Lack of Understanding
Lack of Authority, Clarity or Trust
Lack of Knowledge or Culture
You can’t kill curiosity if it’s already dead.

Curiosity is a weird thing: it's hard to spark when circumstances make it difficult to be creative and interested. Review your goals regularly and see how the culture of the company might impact your overall strategy.

---


The next section was "Practical Tools for Playing Well With Others" with J.B. Rainsberger. J.B. started with a short film about the fact that people are made of meat (the protagonists are aliens observing the humans who live on Earth and the way we interact with one another).


In a diagram about debugging a conversation, we start with Intake: in short, what do I see or hear? Conversations go sideways when someone interprets a message differently from what they thought they heard (or wanted to hear). Meaning follows on: if we say a statement in English, then for some listeners, translating to another language may lose key elements from the original statement. The significance of a message may vary (why do we care about this?). Finally, there is a response, which starts the reverse process again (Intake, Meaning, and Significance).

If we have an Intake problem, we call it a "misinterpretation"; when we have a Meaning problem, we have a "misunderstanding". When we miss the Significance, we may have a "misinterpretation" as well, and then there's the really difficult area: the path from "misinterpretation" to "response". Conditioning, culture, and System 1 or "lizard brain" thinking often come into play when we try to mitigate or control significance or response. This was a cool reminder that, often, we think there are culture or personality differences, when in truth, anyone "can have a failure to communicate". It was an interesting breakdown of how we get into these situations, given that we intercommunicate all the time. Some key takeaways: think of three ways to interpret what just happened; ask "what did you intend by that?"; communicate in "E-Prime" to reduce perceptions of judgment; and warn the other person when you need to say something uncomfortable.

---

My apologies to the Lightning Talk presenters, but I was running on fumes by the end of J.B.'s talk, and I had to get some quick shut-eye (the blessing of the venue being immediately next door to the hotel means I could go and do exactly that). I have come back for the afternoon sessions (including my own), and Scott Barber is next up with "Lifecycle Integration of Performance, Simply and Creatively". Scott starts out with the idea that, for different users and different roles, performance means different things. Profiling, code design, load testing, production monitoring, etc. all play a hand in these expectations. Load testing, stress testing, etc. are all components of performance. Using a simple math analogy, performance is to load as rectangle is to square.

The performance lifecycle needs to run from "conception to headstone," as Scott puts it, a phrase that puts a spin on the cradle-to-grave metaphor (meaning it goes further back and further forward). Whenever an idea for an application is conceived, performance needs to be considered.


An example Scott discussed was the Victoria's Secret fashion show that was live-streamed for the first time in the early 2000s. Research showed some very different profile information than anticipated, with a rather large male audience and a spike that came later in the broadcast. They did really good research on their target market, but dropped the ball when it came to the total potential audience. The performance hit was such that they determined never to be such "Internet Pioneers" again, though the live-streamed fashion show does keep going. The healthcare.gov rollout in the U.S. is a current iteration of this: the performance issues with the site are more than just an embarrassment on the technical side; they're coloring public opinion of the entire program.


To prevent poor performance, you have to check regularly and get a feel for performance perspectives as they are developing. We can't just react when problems happen; we have to look for them first. Ultimately, everyone is responsible for performance, but the past has shown that it is left to the end of the lifecycle way too often. Ideally, it would be baked into every sprint, every story, every iteration. Typically, we treat performance as a sum: Software Performance Engineering + Profiling + Load Testing + Capacity Planning + Application Performance Management == Inefficient and Ugly Delivery and Maintenance. Consider instead the cycle of Target, Test, Trend, and Tune. The whole team needs to be invested in the performance of the product. Prevent poor performance with a little work, every day, from everyone.
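To make that "little work, every day" idea concrete, here's a minimal sketch of my own (not Scott's) of a smoke-level performance check that could run on every build; the endpoint URL and the 500 ms budget are hypothetical placeholders:

```python
import time
import urllib.request

# Hypothetical endpoint and budget -- placeholders, not from Scott's talk.
URL = "http://localhost:8080/health"
BUDGET_SECONDS = 0.5

def timed_request(url: str) -> float:
    """Fetch the URL once and return the elapsed wall-clock seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

# Take the best of five requests to dampen noise, then enforce the budget;
# the assert makes a slow build fail now, not at the end of the lifecycle.
elapsed = min(timed_request(URL) for _ in range(5))
assert elapsed <= BUDGET_SECONDS, (
    f"performance budget blown: {elapsed:.3f}s > {BUDGET_SECONDS}s"
)
print(f"ok: {elapsed:.3f}s is within the {BUDGET_SECONDS}s budget")
```

It isn't a load test, and it isn't meant to be; it's just enough trend data, every day, to notice when performance starts slipping.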

---

James Bach is riffing on testability and what exactly that means. Chuckles abounded when he asked the audience if they were still doing "manual programming", but I think he got the point across. Programmers do have automated programming; it's called compiling, linking, and building, and they long ago realized those were different actions from programming itself. Hopefully, the idea that manual testing and automated "checking" are different things will see a similar dispersion. Testing, as James puts it, is "learning by experimenting, including study, questioning, modeling, etc."


James talked about the "Risk Gap": the gap between what we know and what we need to know. Testing helps us close that gap, and the larger the gap, the harder it is to test. We don't need to drive with our headlights on during the day (running lights on my car notwithstanding, i.e. I can't turn them off :p ). In the dark and the fog, however, lights are vital. Need a big term for the Risk Gap? James calls it Epistemic Testability.


Testing is all about controlling and observing. Products have both general and custom components. The general components are tried and tested; the custom (new) components, probably not. There are explicit and implicit requirements, and the more implicit the requirements, the more challenging the testing process will naturally be. Project-related testability, intrinsic testability, subjective testability, value-related testability, and epistemic testability all need to be covered for effective testing to have a chance of occurring.

James just planted an amazing mental image in my head. He was talking about the many people who look at software testing as easy because they use tools such as Selenium and Cucumber. But what about the things you can't test with those tools? Huh, what could those be? That's when James started pantomiming a Disney song, with chimney sweeps and talking, singing mice and a spoonful of testing... go ahead, just let that image run wild in your brain for a minute. You can thank or curse me later ;).

Excellent testing is just like science. Repeat that with me: software testing is *just* like the actual scientific method, with one exception. Scientists only get one build; software testers have to deal with regression. Therefore, we have to be able to observe and control as much as possible. In order to test well, we need to be able to control the execution to visit each important state, see everything important, and control the variables that might influence it (a plug was made for the game Sporkle, and yes, I plan to play it later :) ). Logging is huge: if you test and don't analyze log files, you're missing a treasure trove of information that can inform your testing. Use fact-checking to back up your creative experimenting.
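On the logging point, even a few lines of scripting can turn log files into checkable facts. Here's a minimal sketch of mine (not James's; the log format and file name are hypothetical) that tallies a test run's log lines by severity:

```python
import re
from collections import Counter

# Hypothetical log format: "2013-11-06 14:00:00 LEVEL message".
# Adjust the pattern to whatever your application actually emits.
LINE = re.compile(r"^\S+ \S+ (?P<level>DEBUG|INFO|WARN|ERROR)\b")

def summarize(path: str) -> Counter:
    """Tally log lines by severity so a run's logs become checkable facts."""
    levels = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            match = LINE.match(line)
            if match:
                levels[match.group("level")] += 1
    return levels

levels = summarize("app.log")  # hypothetical file name
print(levels)
assert levels["ERROR"] == 0, f"run produced {levels['ERROR']} error(s) -- investigate"
```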

James used a cool example of a simple random number generator. If we generate lots of runs, sort the output, and plot it, we see that the result is not a straight line. The fact that there is a curve in the line shows that the output over time is not truly random. An additional issue: the generator's actual math makes it so that two numbers never appear (999 and 1000). Cool and fun example :).
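Here's a rough reconstruction of that experiment (my sketch; I don't have James's actual generator, so a stand-in from Python's standard library shows the sort-and-plot technique):

```python
import random

def generator() -> int:
    # Stand-in for the generator under test; James's real example used
    # different math (which is why 999 and 1000 could never appear there).
    return int(random.random() * 1000) + 1

samples = sorted(generator() for _ in range(100_000))

# Crude text plot of the sorted output: a uniform generator traces a
# straight diagonal from 1 to 1000; any bend in the line reveals bias.
step = len(samples) // 20
for i in range(0, len(samples), step):
    print(f"{i:6d} {'#' * (samples[i] // 25)}")

# Fact-check the range: which values in 1..1000 never showed up at all?
missing = set(range(1, 1001)) - set(samples)
print("values never generated:", sorted(missing))
```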

Next up is... ME :). Tweeters who are here and attending my session, I'd love it if you could tweet during it so I can Storify the posts. In any event, I've got to go and get fitted and teched out. Wish me luck :)!!!


More to come... stay tuned :)!!!
