Thursday, October 15, 2020

Get the Funkify Out: A Neat Accessibility Tool/Disability Simulator

Are you all sick of me yet? Wow, that was a lot of writing/typing/conferring this week. To be honest, I've really missed it. I was happy to participate in PNSQC this year even with the unusual circumstances and challenging technical issues we went through. I will talk more about that in another post and also on the next episode of The Testing Show podcast, but for now, I want to share something a little new for me and maybe new for a lot of you all, too.

While I was developing my "Add Some Accessibility To Your Day" workshop, I reviewed the tools that I use regularly and looked to see if there was anything interesting out there I hadn't played with recently. Many of you know my general toolkit:


  • WAVE Browser Plugin
  • AXE Browser Plugin 
  • Accessibility Developer Tools
  • VoiceOver and NVDA Applications (MacOS and Windows 10)
  • NCSU Color Contrast Analyser
  • Hemingway Editor (yes, it is an Accessibility tool. FIGHT ME ;) )
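(A small aside for the tinkerers: a few of these tools have scriptable counterparts. The sketch below is my own rough illustration, not part of the workshop materials, of running the axe rules headlessly with the @axe-core/puppeteer package; the target URL is just a placeholder.)

```typescript
// Rough sketch: run an automated axe-core scan against a page and print the
// violations. Assumes the puppeteer and @axe-core/puppeteer packages are
// installed; the URL below is a placeholder.
import puppeteer from 'puppeteer';
import { AxePuppeteer } from '@axe-core/puppeteer';

async function scan(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Analyze the rendered page with the axe-core rule set.
  const results = await new AxePuppeteer(page).analyze();

  // Each violation carries the rule id, its impact, and a short description.
  for (const violation of results.violations) {
    console.log(`${violation.impact}: ${violation.id} - ${violation.help}`);
  }

  await browser.close();
}

scan('https://example.com');
```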
I of course discussed these but I also found a newer tool that definitely interested me and that I've been having fun working with. That tool is called Funkify.



Funkify is a Chrome Extension and it is a little different from the tools mentioned above in that it doesn't really call out errors or find bugs... at least not in the traditional sense. This is a simulator that puts you in the driver's seat as any number of people with disabilities so you can experience your site through their eyes/ears/hands.

Funkify modifies your site so that you can see it as these personas see it. You also have the ability to create your own personas based on criteria that you deem important and you can adjust the level of challenge/disability.

For example, let's look at the W3C Before and After site.


How might this site look to someone with dyslexia? Funkify can give us an idea. Just press the toolbar button and select the defined persona (in this case, "Dyslexia Dani"), then expand to see the options.

You can adjust the amount of jitter/scramble of the letters. Make it mild or severe.

Once you've dialed in the level you are interested in, let it run until you are satisfied (or dismayed, or annoyed, take your pick):




There are a variety of disabilities that can be simulated: color blindness, astigmatism, jittery hands, high distraction, macular degeneration, and more. You can also create your own agent/persona with the criteria and level of disability that you are interested in simulating.




Give it a name and a description and save it. It's ready when you are.



This is not too far from my own visual situation. In fact, this looks very much like the page without my reading glasses.

In any event, if you want a simulator that will put you in the shoes of a variety of users, specifically those with disabilities, Funkify is worth a look. To be clear, the free version is limited; if you want all the options, you will need to pay for the Premium version. Still, if you want the opportunity to see what your site looks like in a variety of Accessibility scenarios, this might be just what you are looking for.



Wednesday, October 14, 2020

PNSQC 2020 Live Blog: Diversity in Testing and Technology: A Moderated Panel

 


Wow, three days go by fast when you are typing like mad. This is the last formal talk before my workshop and after my workshop, the conference will be over.






Tariq King is moderating this panel discussing diversity in testing and technology, and he's leading with "is it important if people aren't doing anything to promote it?"

Lisette has built several teams and she says that the teams need to match the users. If the users are diverse, then the teams likewise need to be diverse. Tariq also commented that diversity, by helping us design for our actual users, is often a way to drive revenue.

Raj pointed out that the world of work has changed, with teams spread out across geographic areas, so some diversity exists by default, even if it's not intentional. Even with that reality, there is still a lot of work to be done to more effectively represent our customers and to involve them in the engineering and testing process.

Hussain notes that in individual contributor roles there may be an overrepresentation of some groups, but that definitely is not the case in management. Management skews heavily white and male. In some ways, education requirements can be a barrier, so Google, as an example, is looking to offer a certificate program that, if implemented, will count the same as a degree. Rebecca mentioned that while this is a step, the question is how it will create new pathways up the career ladder.

Lisette points out that the individual contributor pipeline is relatively diverse but the people hiring are not diverse. When companies promote and help diverse candidates grow, they can become leaders in their organizations and become decision-makers for their companies. 

Raj points out that often, mandates are made saying "x number of people need to be diverse," but there's little emphasis beyond the numbers. Hiring is great, but what matters is the path and opportunities for the hires beyond just filling seats. Where will these people go in the future? Will they be seen as valuable team members, encouraged to move up, and rise in the organization?

I'm happy to hear the diversity discussion include Accessibility and individuals with disabilities. Yes, this is my wheelhouse, so let me crow a little ;). There are specific and unique challenges when dealing with neuro-diversity and ability-diversity. If there is an opportunity to interview a hearing-impaired engineer, how will that interview be conducted? Is the burden of interviewing on the interviewer or the interviewee? If an interpreter is needed, who is on the hook for that?

Tariq notes that successful teams are gauged on the amount of time they allow each other to speak, as well as on the gender mix of the team members. It's interesting that successful teams tend to have more women involved. This mirrors my original experience with Cisco, when the majority of the testing was done by women engineers. Was that a fluke of Cisco or was that a broader trend when I hired on? I'll honestly never know, but it was definitely an influence on me and my work over the years. I've long said that if it weren't for a cadre of women who helped me over the years, I might not be in this industry at all.

Lisette says that to just assume a woman would be an excellent fit for QA is wrong. It's interesting to me because I have wondered over the years whether I was the odd one out for wanting to be a tester during my years at Cisco, or whether I was actively encouraged by women to participate because that was their world and they were welcoming me in. As Cisco grew, it certainly seemed that women were not as pronounced a share of the overall tester population, though I would still say it was significant by the time I left Cisco in 2001.

As the father of two daughters and a son, I have actively encouraged my daughters to pursue the goals that they want to. One of my daughters wants to work in aviation, ultimately to become a pilot someday. To that end, I have encouraged my daughters every bit as much as my son and I actively want to see them succeed in whatever role they wish to participate in. 

I am curious to see what effect COVID-19, and the cratering of expectations around working in a specific location long term, might have on the overall attitude of who works where and when. Will this help to encourage a more diverse group of people to apply for jobs, especially if the older expectation of packing up everything and moving someplace is no longer as actively sought after?

Sadly, this is all I can track as I have a workshop to prepare for, so I will bid adieu here. It was a great conversation, and I'm glad I got to hear it while I could :).



PNSQC 2020 Live Blog: Towards A Culturally Inclusive Software Quality with Jack McDowell & Ying Ki Kwong



It's been interesting to watch the world adapt over the past thirty years that I have been in the software testing industry. In some ways, it's a very different world, and yet at the same time, not much has changed. When I came into the testing world, there was a need to understand requirements, there was a need to know how to test, and there was a need to execute a certain number of tests, manually if we must, though automated would be spectacular.

What I just described was 1991. Does it really sound that much different from 2020? Let's take a look at some other factors. Who is doing the work? For me, I remember thinking about this as I was working for Cisco Systems and hearing how few women or PoC were actively involved in tech. I frequently wondered what they were talking about, as I was literally surrounded by women as peers and women as managers, as well as a team that was rather diverse in cultural and ethnic background. It would take several years of watching the company grow for that lens to change and for me to see the point. Where am I going with this? It's sometimes hard to see the issues and challenges we face when we think that the world being described doesn't match our experiences. I made the mistake of thinking the early days of Cisco Systems were indicative of the world of tech as a whole.


Again, interesting, but Michael, what are you going on about? Well, it's key to understanding Jack and Ying Ki's talk (Jack is the one delivering it, but since Ying Ki helped write the paper, I'm giving him credit, too ;) ). In any event, the point is that we see the world through our lens, and that lens may or may not be accurate, but we won't know whether it is accurate if we don't have other points of reference. With this idea, we should also be asking "what does quality actually mean?" It means whatever the team at large decides it means, and then we radiate out from there. However, that misses a bit of the point: our sense of quality is going to be skewed if we don't pay attention to what others think quality is.

An idea that Jack presents is that we not look at trees as a metaphor for meaning but instead look at rhizomes. A rhizome is a plant that spreads out a root system, with buds, vines, and so on growing out at a variety of points. We might look at these as multiple plants, but in truth, it's the same plant.

Ha! Communities of meaning using Accessibility. We're in my court now (LOL!). Well, not really, but it is interesting to see which examples of accessibility are used and what actually counts. Jack is correct that accessibility often treats disabilities as a monolith when in reality, there are a variety of unique disabilities that require very different methods of accommodation. I'm all too familiar with the approach of "load up a screen reader, walk the application, and call it a day". Yes, I've been there and so have a lot of others. The fact is, that approach leaves a *LOT* of things out of the mix or even out of consideration. Sight issues are not the same as hearing issues, and they have little to do with mobility issues, and those have little to do with cognitive issues, and on and on.

Standards are often included in these quality questions. Standards can often be helpful, but they also tend to prescribe a particular way to identify what makes for software quality. Yet the question remains: does a standard take into account the cultural differences of other approaches? Jack argues no, and I'd say I agree with him. Jack just said that Agile Software Development is pretty much Anglo-centric. Ying Ki adds that the way agile software development works in the USA would not be culturally acceptable in Hong Kong, for example: the idea that individual engineers would be able to express their opinions or desires about products and openly question management doesn't carry over. It also shows that in other cultures the Agile principles can be subverted and exploited.

Again, the question we want to consider is "how does culture affect quality?" The answer is there is really no monoculture in all of this, there is a mixture of cultures that blend together and often help define these communities of meaning. How this is actually accomplished may well be a long and involved process but it is interesting to consider other communities of meaning to address these issues.




PNSQC 2020 Live Blog: Are You Ready For AI To Take Over Your Automation Testing? with Lisette Zounon

 


All right! How is that for a title ;)? I give props for coming out swinging and yes, I am indeed curious as to whether or not AI will actually have a long-term impact on what I do as a tester or automation programmer. It feels weird to say that but since "Senior Automation Engineer" is my official title, yeah, I kind of care about this topic :).

 

Many tools are built around allowing us to automate certain steps, but in general, automation excels in the areas of the rote and the everyday repeatable. Automation is less good in dynamic environments and where there's a lot of variability. However, perhaps a better way to think about it is that automation struggles with areas that *we* feel are dynamic and not rote. Machine Learning can actually help us look for the repeatable, perhaps more specifically in areas where we are not currently seeing those patterns.
 
We are seeing the growth of pattern recognition tools and visual validation. As we get further into the process, we see that there are more uses for visual validation tools; it's not just a question of whether the picture is in the same place. My question would be: how can we leverage these tools for more approaches? As in most situations, a lack of imagination is not necessarily a lack of adventure or spirit but more often a lack of relevant experience. We tend to get mired in the specific details, and we tend to look at these little tedious issues as taking too much time, requiring too many steps, or not being flexible enough to actually be useful.
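If you have never played with visual validation, the most basic form really is a pixel-by-pixel diff against a baseline screenshot. Here's a rough sketch of my own of that baseline idea (the file names are placeholders, and it assumes the pixelmatch and pngjs packages); the newer AI-assisted tools are interesting precisely because they try to go beyond this kind of raw comparison:

```typescript
// Minimal visual-diff sketch: compare a current screenshot against a baseline
// and write out an image highlighting the differing pixels. Assumes the
// pixelmatch and pngjs packages; file names are placeholders, and both images
// are expected to have the same dimensions.
import fs from 'node:fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

const baseline = PNG.sync.read(fs.readFileSync('baseline.png'));
const current = PNG.sync.read(fs.readFileSync('current.png'));
const { width, height } = baseline;
const diff = new PNG({ width, height });

// Returns the number of pixels that differ beyond the threshold.
const mismatched = pixelmatch(baseline.data, current.data, diff.data, width, height, {
  threshold: 0.1,
});

fs.writeFileSync('diff.png', PNG.sync.write(diff));
console.log(`${mismatched} pixels differ from the baseline`);
```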

Lisette makes the case that AI can help with API tests. Since API tests can be bounded (as in, there's a set of commands and those commands can take a set of parameters), they can be generated and they can be called. In addition, auto-healing tests are often touted as a step forward. I will confess I have seen self-healing tests in only a limited capacity, but that has more to do with what we use and what we currently test with rather than what is available and usable. I want to see more of this going forward and interact with it in a meaningful way.
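To make the "bounded, therefore generatable" idea a little more concrete, here is a minimal sketch of my own (the base URL, endpoints, and parameters are hypothetical, not anything Lisette showed): enumerate the parameter space and issue a request for each combination, checking that the basic contract holds.

```typescript
// Sketch of generated API checks over a bounded parameter space.
// The base URL, endpoints, and parameters are hypothetical placeholders.
const baseUrl = 'https://api.example.test';
const endpoints = ['/orders', '/customers'];
const pageSizes = [1, 50, 1000];
const sortOrders = ['asc', 'desc'];

async function runGeneratedChecks(): Promise<void> {
  for (const endpoint of endpoints) {
    for (const pageSize of pageSizes) {
      for (const sort of sortOrders) {
        const url = `${baseUrl}${endpoint}?pageSize=${pageSize}&sort=${sort}`;
        const response = await fetch(url);

        // A generated check can at least assert the basic contract:
        // the call succeeds and the body parses as JSON.
        if (!response.ok) {
          console.error(`FAIL ${url} -> ${response.status}`);
          continue;
        }
        await response.json();
        console.log(`PASS ${url}`);
      }
    }
  }
}

runGeneratedChecks();
```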

I often hear the idea that AI will help to create more reliable automated tests. This comes down to agents keeping track of what is right or wrong by the definition of the software. It sounds cool on the surface but again, I'd love to see it in action. Lisette makes a good point that automation for the sake of automation doesn't really buy us anything. Our automation needs to serve a purpose, so putting these AI tools to work helping us find critical paths or critical workflow steps and processes is worth the time. Again, I'm willing to give it a go :).


PNSQC 2020 Live Blog: Let’s Focus More On Quality Engineering And Less About Testing with Joel Montvelisky

 



Joel starts out with a very blunt question... what value do you provide to your company? Not in an abstract sense, but in a cash sense. Does your being there benefit the company in a bottom-line manner? It's a tough question in many ways because software testing is a cost center. No matter how we want to look at it, we are a cost of doing business. Granted, that cost may very well prove to be justified, especially if we find something that could be disastrous if it were to get out, but it's hard to define how much money software testing earns for the company. The truth is, unless our company is in the business of selling testing services, we are not really earning money for the company by doing testing.


This means that many testers often do something other than "just test". That has been the case with me many times, in that I put on a variety of hats when needed. Sometimes that hat is tech support, sometimes it's systems builder, sometimes it's build manager, accessibility advocate, fill in the blank. In short, we do our best to add value in a variety of places, but unless we are directly connected to the health of the organization and the generating of revenue, we are one step removed from the process and our value is less obvious and sometimes not recognized at all.
 
Joel makes the case that changing models of software development are applying pressures to the traditional role of a software tester. In the process, many testers are morphing into Quality Engineers. That's cool and all but what does that involve? Ultimately, it's that we work with developers to test and deliver products quickly, get feedback from the field, and continue to help develop software quickly and with high quality. Notice Joel is not saying "stop testing" but focus on additional areas that go beyond traditional testing. 

As I was listening to this, I kept thinking, "Oooh, this sounds so much like the A/B Testing podcast." And sure enough, that's EXACTLY where Joel was going with this (LOL!). For those not familiar, Alan Page and Brent Jensen host the A/B Testing podcast and they are also champions of Modern Testing (MT). In many ways, the move to DevOps is helping to emphasize the need for testers to move into a more MT paradigm. There is still testing, but the testing is less tied to individual testers and testing teams. I've witnessed this in my own organization. We have a testing team, but all of us are individually embedded in separate teams. Our focus is less that of a testing team that reports to a manager and is the center for quality. Rather, we are often one or two people embedded in a specific team, and while we test, we often work with developers to help them focus on quality and quality aspects.

As we focus on quality and less on testing, we will find that we may be doing less rote testing but a lot more quality initiatives. We can be there early, focusing on developing quality requirements; we can get involved early and help programmers with questions and considerations as they are writing the code; we can get involved with automation and infrastructure so that much of the rote and humdrum work is codified and taken out of our front-line attention. We can be involved in build management and the CI/CD pipeline and make sure it is working cleanly.

I appreciate Joel including advocating for Usability and Accessibility. He also emphasizes having testers be part of bringing customers into the process, whether directly or indirectly. The better we understand the business cases and real-world uses, the more effective we can be at addressing the real quality issues. We also have the ability to teach developers about testing and test principles. I have had a number of situations where I've shared testing ideas and the developers have said, "oh, that's cool, I've never looked at it that way." Be aware, though, that some developers simply do not want to test. That's OK, but it also needs to be clear that we are moving into other avenues and that if they don't test, there's no guarantee that we will either.

One thing's for sure, the future is going to be fluid, and to borrow from Ferris Bueller, "Life moves pretty fast. If you don’t stop and look around once in a while you could miss it."

PNSQC 2020 Live Blog: Test Engineers are the Connectors of Our Teams with Jenny Bramble

 


If you know Jenny, you know her enthusiasm and her energy. There's no way I'm going to capture that here, but I will do my best to capture her message at least.



Testers are in a unique position to bring the squad together, so to speak. In my musician days thirty years ago, one of my bandmates said that I was the "psychic glue" that held us together. In many ways, I feel testers can be that "psychic glue" for an organization. However, being psychic glue has a specific requirement: we must be engaged. If we are not engaged with our team and with the other people in our organizations, we lose effectiveness.

Testers can be at the heart of communication but again, we need to make sure we are communicating to be actively involved. We talk about requirements, discuss bugs, present new features, and give the user’s perspective. No one else really has that level of reach, so let's make sure we are aware of that place we hold, and LET'S USE IT!!!

Teams are built on communication, and teams fall apart without it. Sometimes I have found myself in situations where I have had to push into conversations. The fact is, we are welcome, but we are not always invited. I think at times that's because we are often the bearer of bad news, or, to borrow an old quote, no one likes the person who calls their baby ugly ;). That means the responsibility to communicate openly and honestly belongs to us, and it means we need to be careful but also deliberate in our interactions.
 
I have had experience being a musician, a competitive snowboarder, a scout leader, a husband, and a parent. That is on top of being a software tester. Those are all contexts that require communicating a little differently. The way I communicate musical ideas is not going to work the same way when I communicate with scouts, and likewise the other way around. It's not that the method of communication is all that different; it's the familiarity with concepts and approaches. Sometimes I can just say a phrase and my bandmates will know immediately what I mean. The reason for that is that we have interacted a great deal, so we have stubbed our mutual toes a whole bunch and had lots of false starts. Because of that, we have a high tolerance for each other's eccentricities, and that helps us communicate quickly. However, I know for a fact I could not communicate that way with the scouts I lead, or with the developers on my team. Familiarity breeds contempt, sure, but there's a benefit to that so-called contempt. I rarely get to that point with my scouts or my developers, so I need to be a little more reserved in those capacities. On the other hand, I find that I can be more focused and granular with scouts and developers in a way that would get me picked up and thrown out the door by my bandmates. Again, style is important, but communication is essential.

Jenny is highlighting that some people want a lot of details, while others want a high-level summary. A neat example she shared about how to work with a summary person is to have them review emails you plan to send. It sounds odd and awkward, but at the same time, I get how that would work to get their attention and to give you hints as to how they prefer to communicate. Jenny also explains that she is a story-based communicator (I love this revelation because it reminds me so much of me). The way she describes this reminds me a little bit of "it's a baby penguin, caught on an iceberg"... and if you are not on TikTok, that may not make a lot of sense, but yeah ;). The point is, invest in people and they will invest in you.

Regardless of how well we try to communicate, we often miss the mark ("What's a penguin? What's an iceberg?!"). It's not because we are dumb or indifferent; it's because we may just not have hit on their preferred way to communicate. Some people don't like speaking but do great with texts. Some people struggle with email but do great on the phone. You just have to keep trying and find the communication mechanism that works. In short, if you intend to be a good dose of psychic glue, be prepared to communicate and be prepared to put in the time to learn other communication styles. By doing that, you can become that indispensable member of the team that no one could imagine functioning without.

Tuesday, October 13, 2020

PNSQC 2020 Live Blog: Quality Engineering – Reflecting on the Past & Present to Accelerate the Future with Dr. Tafline Ramos

Ok, I hereby confess that the last talk was a hard one to wrap my head around, so I'm looking forward to getting my head back on straight. Here's hoping this talk helps to do that (btw, just kidding about the last talk, but I confess I've never considered floating point in depth in such a manner before).

Our closing keynote tonight is about the evolution of modern Quality Engineering. To understand where software quality is heading in the future, it helps to understand what we thought of Quality Engineering in the past.
 

It certainly seems that the conversation about Quality is circular; many of the ideas run from people like Deming through to Margaret Hamilton and Glenford Myers. Somehow, testing was split off from development as a standalone discipline, and now we seem to be working to merge testing back into more traditional quality standards. The problem with having dedicated testers (and so many of them for specific issues) is that we created and refined the role of the tester to sit at the tail end of the quality process. This reminds me of how few dedicated testers there were in the early 1990s versus the latter part of the 1990s, by which point software testers had become a specific, separate group.

Granted, I can only talk about the Cisco Systems approach in the 1990s, as it was the only tech company I worked for at that point in time. Early in the days of Cisco, the software testers were not a dedicated group. Everyone who was a tester did something else in addition (my early testing years also included being a Lab Administrator). I didn't get designated a Test Engineer proper until 1994, and that was when many others became branded as testers in a software sense, with that being the core purpose of our work.


I've also seen that there does seem to be a divorce between the idea of building in quality upfront and the reliance on testers as the main line of defense against bugs. I remember that attitude and approach well, and in many ways, it is still that way. Ideally, we need to treat Quality as a process at all levels. We need quality all the way back at the very beginning, in the story workshop that proposes a new feature. It needs to carry over through development, with everyone at every level associated with quality and making it a core part of what they do.

To be clear, I'm not saying people go out of their way to not deliver a quality product, just that our practices tend to make that approach harder than it really needs to be. It's interesting how shifting the word Test to Quality changes the whole conversation.

Cool, so what is a Quality Engineer, and how do we become one? We would need a range of skills that are functional, technical, and consulting-related, to be able to work effectively with a number of groups, not just our own and not just related to testing. Quality Engineers need to be conversant in the methods, techniques, tools, and approaches that make them effective in a variety of areas. A Quality Engineer also needs to beef up skills in technical areas, specifically coding.

The fact is a large percentage of projects fail, badly enough to threaten a company's existence. A poor scope on requirements and quality overall ranks high in the reasons for this. There are a lot of techniques that can be used to help mitigate these failures. To reduce the overall cost of quality, it would be much more helpful to focus our energies early in the process, and let's stop the idea of testing at the late stage of development. Move the testing further back in the process and make quality the most important discipline in all groups as far back as possible.
 

PNSQC 2020 Live Blog: Testing Floating-Point Applications with Alan Jorgensen




OK, now I'm getting mildly anxious (LOL!). Hearing about floating-point errors being disastrous always makes me nervous because I frequently look at calculations and think to myself, "how precise do we need to actually be?" Also, I wanted to give credit: this talk is being presented by Connie Masters.




I've always operated on the assumption that anything beyond the ten-thousandths place is not that critical (heck, as a little kid I learned Pi as 3.1416 because anything more precise was just not relevant; thanks, Grandpa ;) ). However, I know that in certain applications (chemistry and astronomy, for example) considerably more significant digits of accuracy matter. I'm having to rethink this now, as I feel that with digital systems and huge sample sizes, rounding errors that are insignificant on their own can stack up and become a real issue. Also, at what point do we go from insignificant to catastrophic?
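As a quick, concrete illustration of that stacking effect (my example, not one from the talk): 0.1 has no exact binary representation, so repeatedly adding it drifts away from the exact answer.

```typescript
// Summing 0.1 a million times: each addition rounds to the nearest
// representable double, and those tiny errors accumulate.
let sum = 0;
for (let i = 0; i < 1_000_000; i++) {
  sum += 0.1;
}

console.log(sum);           // not exactly 100000
console.log(sum - 100000);  // small, but measurably nonzero
```

Whether that residue matters depends entirely on the application, which is exactly the question above.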

This is the first time I've heard the phrase "logarithms are the way to add apples and oranges" and I'm not 100% sure I get the implication, but it's definitely a memorable phrase. The only thing I am feeling for sure is that what I've understood as discrete mathematics all these years is lacking... also, I don't quite know how I feel about that. The net result of all of this is that I need to get comfortable with bounded floating-point and get some practice with it.





PNSQC 2020 Live Blog: Quality Focused Software Testing In Critical Infrastructure with Zoë Oens




We are all familiar with the "Iron Triangle" where we get three sides: Faster, Cheaper, Better... pick two, because that's all you will be able to achieve. One hundred percent coverage is unachievable, especially if the goal is to save money in the process. Still, what is the answer when you have to test software on a critical system? When I say critical, and when Zoë says critical, we are talking about things like power systems, water, medical, banking. Bugs in production are not trivial.



So what do we do when it comes to testing CI (critical infrastructure) software? How close to one hundred percent coverage can be achieved? If a feature fails, what is the fallout? How many are affected? While not specifically software related, some of you may know/remember that I have direct experience with a Critical Infrastructure failure in my community. My town of San Bruno suffered a major gas line explosion in 2010. Many lives were lost and many homes were destroyed. It took years to rebuild and in many ways, the scars and memories are still fresh.

When we are looking at testing in these critical areas, we have to be able to prioritize and determine the mission-critical stuff. Granted, we can't just declare everything as mission-critical, but when dealing with the electricity grid or gas supply, yes, critical becomes more meaningful.

Zoë mentioned her time in manufacturing helped her approach the questions and issues where CI comes into play. Make sure that there is a spread of knowledge. CI is an area where there should be no silos. No one person should be the one to know how to fix problems when they occur, both at a coding and an operational level.

An emphasis on test writing, test setup, and test execution needs to be codified and well understood, as well as run frequently and aggressively, to be sure that as many cases as possible have been focused on and addressed. Automation falls into this space, especially if the steps are codified and well known. Anything repetitive, especially if it is rote repetitive, should be automated. The goal in CI environments is to be able to run testing consistently and effectively. While there is an upfront cost here, this cost can be amortized over time, especially if any of the tests are long-lived.

CI environments can sound scary but the key takeaway to me is to be mindful and diligent, as well as look for repetitive areas that can be codified, confirmed, and repeated.

PNSQC 2020 Live Blog: Teaching Testing To Programmers. What Sticks, And What Slides Off? with Rob Sabourin and Mónica Wodzislawski




As soon as I saw who one of the presenters for this talk was, I had a feeling that "OK, this is going to be fun!"


I've worked with Rob over the years when I was part of the Board of Directors of the Association for Software Testing and I remember talking with him about this very topic. There's no question that testers can learn how to program and work as developers. It also stands to reason that developers can learn the skills to become testers. 

Mónica Wodzislawski is new to me but I have learned recently about a vibrant community of software testing centered in Montevideo, Uruguay, and I can only assume Monica is actively engaged in that community.

Corporate initiatives to “shift left” and the focus on Agile software development, Test-Driven Development, and other areas pay lip service to software testing but do not approach testing in the manner that a skilled tester would. That does not at all mean that good developers cannot be good testers. They most certainly can, and many are. The idea of actually teaching software testing as a discipline within the Computer Science curriculum is a good one. Over the years, I've heard of only a few places where testing is even mentioned beyond writing unit tests. The idea of a semester-long course dedicated to software testing (akin to the BBST series of classes) is still a foreign concept in developers' learning journeys.

Some of the challenges that get in the way of testing include the notion that software testing can't be seen as universal (and to be fair, it cannot; there is no one-size-fits-all approach). There also seems to be a hangup on the tool stack: if you don't understand all of the tools, then you can't do effective testing. There are going to be unique tool needs, and there will be unique challenges related to specific tasks, but even with that, there are many testing methodologies that can be leveraged regardless of the software stack being used.

Often the approach to templating and patterns is heavily emphasized with some developers. Others tend to look at the code and automation as the end result. Automated tests are good tools to have, and it's important to understand them, but there is a lot of software testing that doesn't involve automated testing.

Rob and Mónica both emphasize looking at risk to help determine areas that might need testing at a preliminary or expanded level. Create models that stand up to the work that you need to do. By developing a model, we encourage developers and testers to focus their attention on the areas that are going to be the most vital to the business. There's plenty more to do, but that's a good place to start.

By setting quality goals and examining case studies, software developers can learn a number of great skills including building conceptual models, examining and applying patterns and anti-patterns, and examining common faults and solutions that can be applied.