Wednesday, November 11, 2015
Early Morning Enlightenment - Live from #AgileTD
It's another full day ahead for us, including my talk in a couple of hours in the "big room". I'm both excited and anxious, but I'm grateful that I'm covering a topic I've been actively engaged in for quite some time, one that Albert Gareev and I have been actively working on for the better part of a year.
It's been a year of learning and experimenting, and I am hoping the details I am providing about Inclusive Design will prove to be interesting and possibly game-changing for some. For obvious reasons, I will not be live blogging my own session today, but I hope to have a recap about the experience a bit later in the day.
After a fun Wild West themed party last night, some die-hards are in bright and early to do Lean Coffee once again. It often astounds me how many people show up for these early morning events, especially after a late night's revelry. Work hard, play hard, I guess ;).
The range of topics, as always, is wide and varied, and based on the top votes, we started out with "Relevant Test Metrics in Agile". There's a frequent push and pull in the Agile space about not overemphasizing metrics, but the fact is, measurement happens in organizations. We need some way to determine whether we are succeeding or failing, or how long it takes us to get stories created, tested, and out into the wild. One challenge when working with areas such as TDD, ATDD, BDD, etc., is that the number of bugs is often nebulous. In my own experience, since I work in a Kanban shop, bugs found in the process of development are reworked inside the story. We find a number of things, we fix a lot of things, but the number of "bugs" in most of these cases is zero, because the work is done as the "product" moves down the line. In our world view, bugs are what happen after that feature "ships", whether they are found on our staging server or in production. We definitely keep track of and focus on those, since they are genuine bugs and issues that made their way through testing. For us, this is relevant because it gives us insight into areas where we can provide greater focus during testing. Other organizations look at things like the financial cost to the business of delays or of issues that make it out into the wild. I find that intriguing and wonder how it would work in practice, but it certainly "bottom lines" the approach.
"No UI Tests" looks at the dichotomy between the value of automated UI tests (or supposed lack thereof) and the fact that a lot of tests do have to happen within the UI. Part of the challenge is that, traditionally, UI tests run from the front end are more brittle than similar unit tests that exercise atomic functions and procedures. Still, the UI is the interface most users will actually be using, so those tests have value and are important to run. Getting a balance between writing tests that focus on unit testing, integration testing, and a few key tests built around the UI is important. Having a Page Model approach helps a bit, and there is definite value in making sure that surface-level changes don't introduce problems in the testing and create an undue burden on the test maintainers. Tests that need to be frequently reworked are going to be problematic. Some rework is, of course, to be expected.
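The Page Model idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (all class and locator names are made up, and the "driver" here is a stub standing in for a real browser driver such as Selenium WebDriver): the page class owns the locators and UI mechanics, so a surface-level change to the page only requires editing the page object, not every test that uses it.

```python
class FakeDriver:
    """Stub standing in for a real browser driver (not a real Selenium API)."""
    def __init__(self):
        self.fields = {}
        self.current_page = "login"

    def type_into(self, locator, text):
        # Record what was typed into which field.
        self.fields[locator] = text

    def click(self, locator):
        # Pretend a click on the login button navigates to the dashboard.
        if locator == "css:#login-button":
            self.current_page = "dashboard"


class LoginPage:
    """Page object: tests call intent-level methods, never raw locators."""
    # Locators live in ONE place; a UI redesign means editing only this class.
    USERNAME = "css:#username"
    PASSWORD = "css:#password"
    SUBMIT = "css:#login-button"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type_into(self.USERNAME, user)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


# Usage: the test reads at the level of behavior, not widgets.
driver = FakeDriver()
LoginPage(driver).log_in("tester", "secret")
print(driver.current_page)  # dashboard
```

The point isn't the stub itself; it's that the test body never mentions a CSS selector, which is what keeps surface-level churn from rippling through the whole suite.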
"Building and Sustaining Internal Communities in Organizations" is another topic that received a lot of votes, and this is going to depend considerably on the size and culture of a company. In my case, the community is relatively small. At present, we have three software testers and nine programmers, so we are pretty solidly integrated. Larger companies, I would imagine, have a harder time integrating and talking with one another. One way to encourage this level of discussion was to cover a particular topic, or to run some kind of tester games, and find a way to make it applicable to the broader group. By doing that, they were able to get more of the testers together to share ideas and develop camaraderie within the various teams.
"What do you do Against Agile KPI's"... put simply, Key Performance Indicators are going to vary dramatically from company to company. Each organization is going to determine which aspects matter to them, and many organizations look to fish out "metrics" to answer this. On the positive side, they are values that are objectively measured. On the negative side, metrics are almost never objective; they are filtered through the audience and the people interpreting them. A question to ask is "What caused the organization to look to Agile as an approach in the first place?" Generally, the goal is to be more efficient and more responsive, to create better software out of the gate, but ultimately, the goal for most organizations is to do well and to keep themselves viable. That means the Excel sheet needs to line up at the end of this, so it's key that any and all performance indicators ultimately support the organization, and there's really no way to totally escape that.
All right, that's a good start, heading into the main room and getting ready to get day two underway, including my own talk. See you all soon :).