The paradigm shift in preventing bugs vs. catching them later

The notion of preventing bugs vs. catching them later is not a new idea; the methodology has been around since the dawn of time. But has it really been put into practice through the years at companies that employ dedicated software testers?

“Shifting testing left” is a popular expression nowadays, but what does it really mean, and how does it impact teams and testers in the new agile world of software development? In particular, in the context of trying to prevent bugs vs. trying to catch them later.

Alan Page had a great tweet a while ago about this paradigm shift in how we perceive testing and finding bugs:

I love this quote, and it really should be the thought pattern for teams now. What today’s testers and teams need is this mindset shift, to help make this happen and to shed some misconceptions about testers and testing in general.

If I hearken back to the glory days of waterfall, one of the greatest values of a tester was their unique ability to catch bugs and prevent them from escaping into production. Developers would complete their work and the issue would be assigned to a tester to ‘assure the quality’ of the work that was done. Testers would delve deep into the issue, trying to find bugs at all costs. Testers would be lauded for their abilities, and in many cases promoted, for finding bugs before they escaped into production and ‘saving the company’ from embarrassment and from customers’ perception of a lack of quality and attention to detail.

I remember trying to prove myself by finding and logging copious numbers of bugs. I remember performance reviews listing the number of bugs found, and getting raises and promotions to ‘senior’ or ‘lead’ based on this. I remember being a developer for a few years and being warned to avoid assigning your issues to certain testers, as they would catch more problems than some of the others and cause more headaches and rework.

Thank goodness times are a-changin’!

One difficulty for management is justifying having dedicated testers. It was easier back then, because they could point to the reams of bugs found, print out reports, and talk about it in meetings. It is much harder now, as teams are raising fewer bugs, and in some cases no bugs at all, since they fix them right away and do not waste time with bug repositories and the wasteful churn that comes with them.

So, how do we value our testers?

I’ve blogged in the past about the purpose of our testing: https://testingfromthehip.wordpress.com/2016/06/14/what-is-the-main-purpose-of-our-testing/

… and also the value we should be providing our teams: https://testingfromthehip.wordpress.com/2016/01/08/am-i-really-a-valuable-member-of-my-team/

As managers, talk to your teams and find the value your testers are providing. Reward them for preventing issues from occurring, for helping deliver features that your customers love, for having meaningful quality and testing discussions prior to work starting, for pairing with developers early on, for bringing quality processes to your team that improve how you are building software.

As testers, don’t measure yourself by your ability to find bugs. Ideally, we shouldn’t want to find any bugs at all. If you do find one, have discussions with your team about how to prevent the same thing from happening again, and about finding and fixing the gaps that allowed it to happen.

It can be rewarding to find bugs, no doubt. But share as much as you can about what your testing strategy is and how you are going to test. Don’t feel the need to keep that ‘ace’ in your back pocket so you can find bugs and look clever later.

Don’t be that guy/gal!


Test Cases Are Evil! Or are they?

Ah, test cases, the artifact everyone loves to hate. Why the hatred? Why no love? I am well aware of the testing community’s dislike for them. But let me expound on some possible virtues of test cases.

Now before you (and the rest of the twitterverse) explode in animosity and rage over this, please sit down, grab a nice cold beverage, open your minds and free your thoughts. Be open-minded, non-judgmental and realize that everyone’s context is different and we do not all work in a utopian work environment. If you do, you can just stop reading and go back to playing fetch with your unicorn and experimenting with the varied usages of fairy dust.

Many of the downsides of test cases are well documented in blogs and articles.  They tend to be things like:

  1. The executor is just ‘checking’ and not ‘testing’
  2. Following a script may make you miss other important problems
  3. The executor tends to shut off their brain while executing
  4. You are only validating a small slice of functionality
  5. Anyone can execute them – they are not a replacement for skilled testers doing real testing
  6. If the series of steps in a test case is that important, then it should be automated
  7. A tester’s time is much better spent doing other things than executing mundane, scripted test cases
  8. Test cases are frequently out of date and need to be constantly updated and revised

I could go on and on. While I do not disagree with most of these sentiments, what makes them downfalls can also make them valuable.

Huh?

How dare you!

The following are my thoughts on the benefits of test cases and how they can support your testing. As with everything in testing, they are based on context.

Ok, everyone take a deep breath, let’s continue…

Time

The bane of every tester. “I wish I had more time to test” is one phrase I’m sure we all utter from time to time. It is undoubtedly one of our biggest constraints (besides ‘resources’, which I will touch on next). For many of us who are constantly having to multi-task, context-switch, and be responsible for large numbers of products and features, test cases can be a useful resource to aid in our testing. For little-known or rarely used features, they can be helpful in doing simple checks (or smoke tests if you prefer). If release time is approaching and you have a large number of features to test, they can help you move through them quickly. Using proper risk heuristics to help determine what needs testing vs. checking, you can execute some simple scenarios and, better yet, get others involved if need be.

Resources

Most organizations do not have enough testers, and we sometimes need to bring in others to help. If you work somewhere that does have enough testers, and you are only responsible for a small portfolio of products and features, then good for you. For those that do not, resource management can be a problem. Where test cases can help is by easily getting others involved in the testing. This is important in regular regression checking and release testing. If time and resources are a problem, the ability to get developers, product owners, technical writers, UX people or anyone else involved in testing is sometimes necessary. Test cases are written in a way that is transferable and easy to digest. The problem with checklists, mind maps and the like is that they are easily understood by the author, but not necessarily by anyone else. And when time and resources are a concern, spending time hand-holding others and explaining artifacts and how to test something is not feasible either.

Reminders

When you own lots of features, there are times when you constantly ask yourself “What is the workflow again?” or “How does this work again?” or “How do I set up and configure this again?” or “What data do I need again?” I think you get the point. Test cases can be a great reminder for testers of how to configure, set up, and execute basic workflows. When time is of the essence, they can make a huge difference. A strong, professional tester should feel free to veer off the path while executing a test case, or use it simply as a base to guide their testing.

Domain Knowledge

For those who work in a “heavy” domain (think banking, aerospace, defense, medical, etc.), the domain concepts can be a challenge. Who the personas are, how they work, and how they will use and interact with your software can be difficult to wrap your head around sometimes. Having test cases that detail this information can be very important in giving you that context. Rather than just using a checklist of functionality to check, tying that functionality to a persona and how they use it is important.

Product and Feature Knowledge

There are times when we need to test things that we do not know very well. When you are given a testing assignment with a quick turnaround time, this is difficult. Never having worked with a feature, seen it in action, or tested or experimented with it before is a tough challenge for a tester. The existence of test cases can enable quick ramp-up on basic workflows, personas, data, and other items that may be important to check. Used wisely as a resource by a skilled tester, they can be greatly appreciated and useful.

Developing Shared Understanding

There are many standards used in organizations to help share information on how to do things. From a testing perspective, there is really only one that is universally understood by everyone – testers, developers, technical writers, product owners, product management, executives, outsource partners, support, services, and customers. That is the old test case. Whether or not it is the best way to test something is debatable; what is not debatable is the universal language it represents. This is where it ‘may’ have merit. This is for you and your organization to figure out.

Technical Debt

Most of us have a lot of technical debt. There are legacy features that customers still use that were either built in the early days with little to no automation, built by your team but never automated, or inherited with no automation at all. Finding the time to automate legacy products and features is tough. Your organization should help you prioritize this effort. In the meantime, the execution of legacy test cases may be necessary.

Better Than Nothing

I have often referred to the execution of a test case as ‘better than nothing’ testing – BTNT. While I do agree with a lot of the aforementioned downfalls of using test cases, in many of the examples I have described above, it certainly is better than nothing.

These are but a few examples of why I think test cases can have merit. So, before you dismiss someone’s use of them, think about their context, ask them why they use them, and have a discussion about the possible downfalls and benefits.

I’ve often espoused the benefits of exploratory testing over executing test cases. But I do think test cases ‘can’ have their place in some contexts.

And yes, if you can, automate the damn things as well!
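
On that note, here is a minimal sketch (in Python) of what turning a scripted test case into an automated check can look like. The ‘shopping cart’ feature and its interface are entirely hypothetical stand-ins for whatever your product actually exposes:

    # Hypothetical sketch: the scripted test case "add items to cart,
    # verify the total updates" rewritten as an automated check.
    # The Cart class is an invented stand-in for the feature under test.

    class Cart:
        def __init__(self):
            self.items = []

        def add(self, name, price):
            self.items.append((name, price))

        @property
        def total(self):
            return sum(price for _, price in self.items)

    def test_adding_items_updates_total():
        cart = Cart()                  # step 1: start with an empty cart
        assert cart.total == 0
        cart.add("widget", 5.00)       # step 2: add two items
        cart.add("gadget", 7.50)
        assert cart.total == 12.50     # step 3: verify the expected total

    if __name__ == "__main__":
        test_adding_items_updates_total()
        print("scripted steps now run as an automated check")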


Raise problems NOT bugs!

Recently, I tweeted the following:

When communicating something found during testing, refer to it as a problem, not a bug. You would be surprised how much more engaged others get.

The reason I wanted to blog on this topic is that I feel strongly about the value we provide as testers, part of which is the ability to investigate and raise potential problems with your team(s). I like to use the word ‘problem’ and not ‘bug’. I will expand on why…

REASON 1: Bug – assumes we are quality gatekeepers.

Testers should not be the team members who unilaterally decide what is and what is not a bug. We should be relaying observations or potential problems, and allowing the team to investigate and determine whether something should be classified as a bug. Teams have multiple disciplines with various skill sets and points of view (development, business, UX, etc.); utilize them to help make this classification.

REASON 2: Bug – may not indicate you are seeking assistance.

Identifying something as a bug is not necessarily a cry for help or a request for assistance from your team. Raising a problem, on the other hand, can signal an alert to the team to get involved (which also leads into Reason 3).

REASON 3: Bug – may assume the necessary due diligence has already been done.

When you relay to your team that you have found a bug, it can be assumed that it is indeed a bug, that you have done your due diligence, and that it needs no further investigation. This may lead to erroneous bug reports, unnecessary churn, or just a complete waste of time. Getting the whole team involved helps prevent these things from happening and, more importantly, helps prevent them from happening again.

REASON 4: Bug – denotes something that needs fixing.

Again, this classification assumes that it is a problem that needs fixing.  But does it?  How do you know for sure?

REASON 5: Bug – assumes the customer would have a problem with what you have found.

Calling something a bug assumes that a real-life user of your product would also have a problem with what you have found. But would they? How do you know? What you think is a problem may not be one for them. Find out this information if you can.

REASON 6: Bug – can be a downer for developers.

Calling something a bug can be a downer for a team, and especially for a developer. An assumption can be made that something was missed or implemented incorrectly by the developer. However, the problem may have originated from bad design, ambiguous or missing requirements, out-of-date specs, poor UX, or improper/misguided testing. Raising something you find as a ‘problem’ instead gets the whole team involved in the investigation, resolution and root cause. It avoids finger-pointing and misidentifying the source of the problem. Believe it or not, many people on your team love to investigate problems as much as you do!

REASON 7: Bug – can inspire a blasé attitude.

Finding potential bugs is certainly an important aspect of what we do. But identifying something as a bug can lead to blasé attitudes among team members: “Ok, you found another bug, send it back to the developer who worked on it”. This is where you want your team members to become more engaged. I have found that when I identify something as a problem, the whole team wants to know what it is and get involved. Conversely, when I have said I found a bug, team members not associated with that issue do not show the same level of engagement and curiosity. Keep them curious and engaged!

This is not to say you should never raise bugs. Of course, some are obvious. But try getting into the habit of referring to everything you find as a problem, potential problem or observation, and make note of the engagement levels of your team. Hopefully they go up. If they do not seem to, still do not revert to reclaiming your throne atop the quality gate.

Keep at it until it works.

What is the main purpose of our testing?

In a previous post, I discussed items that can help a tester provide value to his or her team: https://testingfromthehip.wordpress.com/2016/01/08/am-i-really-a-valuable-member-of-my-team/.

The ideas I brought forward there were more about someone in a tester ‘role’, and not necessarily about the specific act of testing itself.

In this post, I want to address what the real goal of testing should be, and provide some ideas and suggestions about how our actual hands-on testing can help. What prompted this post was some recent interviews I conducted. Many candidates struggled to give examples of the value their testing provides. As testers, we should always be striving to provide maximum value from our testing. If we don’t know what this is ourselves, how can our organizations truly appreciate and value the work that we do? Simply saying things like ‘catch all bugs’, ‘execute all tests’, or ‘release or regression test’ does not indicate what the point of our testing is, nor what value any of those activities provide.

In my humble opinion, the main purpose of testing can be summed up as…

“Help your team make an informed opinion about the perceived quality of the products and features that they own.”

That’s it. That’s all.

I like to call it “Perceived Quality Assessment”. I will briefly touch on ideas and techniques that can allow us to help our teams do this better. Before I continue, I want to state that the notion of the ‘perception of quality’ is not new, nor is it my idea. Many thought leaders have written about it before, Gerald Weinberg for example. This post is not an attempt to take credit for the concept of perceived quality; it is my own take on it, and on how we can help.

Before I get to my list, let’s first talk about perceived quality. A team can never really know the quality of their products or how they will be perceived in the marketplace. They can know things like the number of open defects, the missed requirements, the poorly written or thought-out designs, or early feedback from demos. Our testing should provide our teams with as much information as we can, so they can form an informed opinion on their own perceived view of quality. This perception should help teams decide things like when to release or not release, when to enhance, when to bug fix, when to scale back, when to move on, etc.

The following is a brief list of items that can help us provide this information to our teams. I am only including 5 items for the sake of brevity. There are many more, but I like these 5 as starters 🙂

  1. Unknowns
  2. Risks
  3. Test Coverage – specifically what is not being tested
  4. Non-functional testing like ‘quality attributes’
  5. Testing how customers and various personas use your features

Let’s talk unknowns. Testing should always be trying to uncover unknowns. This does not necessarily mean bugs. This can be anything we never thought of, overlooked, or did not know about in the first place, or anything that may have dire consequences. This is all information that your team needs to know about. Look hard, explore and find it. Your team needs you for this!

Let’s talk risk. Risk can manifest itself in many ways: the aforementioned unknowns, for example. It can also be lurking in new bug fixes or enhancements, other teams’ work that may impact yours, problems integrating with other products or features, or really anything that has changed. Testing should always be monitoring these areas, investigating and exploring. Automation can do this to a certain extent (by checking known conditions), but a good, skilled tester can uncover risky areas that automation does not know exist.

Let’s talk test coverage. One of my favorite topics, and one that not many really understand that well. Knowing what your testing (and automation) is covering, and more specifically what it is not covering, is incredibly important information that your team needs. It’s great to have lots of testing in place, but your team needs to understand what that testing is all about and what is not being tested. This rolls up into the previous two topics: unknowns and risk. Without understanding this, can a team really make an educated perceived quality assessment? I think not.

Let’s talk non-functional testing. Not that I am necessarily a fan of this terminology, but since it is widely used I am using it here. A tester should always be testing to uncover potential problems with the quality attributes that matter. These are also sometimes referred to as the ‘ilities’ of testing, and they are critical in many customers’ minds when forming their perception of a software’s quality. Customers typically do not care that the requirements were met, that the acceptance criteria passed, or that test cases were all executed successfully. The perception of the quality of software can be seriously impacted by things like security, performance, accessibility, usability, and the like. Test for these. Your teams need to know this information.

Let’s talk about how customers use software. Since software can be highly configurable, we need testing to flush out potential problems that may occur when using it in differing or, in some cases, unintended ways. Our testing should factor these in. Seek out information on how your customers use your products and on the various personas involved. Test accordingly. Your teams need this information too.

Take a moment and think of the value you are providing with your testing. Are you really helping your team make informed decisions? Are you testing the right things, at the right time? Are you a lighthouse, shining light so your team’s boat can avoid obstacles and problems on its way to its destination? Let’s hope you are!

Thanks for reading. I hope something resonated with you. If anyone agrees or disagrees with anything I wrote, feedback is always welcomed.

Test well my friends!

Use ‘Test Items’ to help improve upfront collaboration and quality

In the spirit of collaboration and getting testers involved early in the development of new feature stories, we have developed a quality initiative that helps us do this.  We are implementing ‘Test Items’ to be added to our story templates.

For some context… we have small agile teams (8-9 members) and we manage our work using a Kanban board. When a new story is added to the backlog, the whole team gets together to review, estimate, question and provide feedback before the story gets moved to our ‘Next Up’ column on our board.

A developer then grabs the story and starts doing their analysis and deciding on how to implement.

A tester starts working on the issue as well. The tester does their own analysis of the issue, examines and modifies the acceptance criteria, and has early discussions with the developer if they see something amiss. What we are now starting to do is define test items. These are specific things that the tester is ‘thinking’ about testing, wants to propose, or wants to have a discussion about. These are added to a section of our story template under ‘Testing Considerations’. The test items should be simple bullet points and contain just enough information for everyone to understand the context.

Here is an example:

Testing Considerations:

T1:  Acceptance Criteria
T2:  Automation
T3:  Configuration parameter removal
T4:  New solr.properties file gets deployed when installing MX
T5:  Valid and invalid entries in solr.properties file
T6:  Starting, stopping, restarting Solr with no errors
T7:  Add part requirement flow still works as expected and parts are returned successfully
T8:  Quality Attributes: performance, security, reliability, usability, etc
T9:  Multiple Browsers
T10: Multi-User access
T11: Data integrity and validation

When the issue is ready to be worked on, our team does a ‘3 Amigos’ session involving the PO, the developer and the tester, to have another discussion prior to work commencing. The developer talks through their implementation plan and ideas, and the tester talks through the various TIs they have identified. The first two TIs are typically the same, where we talk about the acceptance criteria, how we will test it manually, and what our automation plans are. The beauty of using TIs is that we also talk about all the other items we are intending to test. This gets us all on board and agreeing to the items. The developer knows ahead of time what we will be testing (so there are no surprises), and we can have intelligent conversations about quality approaches before a single line of code is written. With the annotations, we can easily reference the TIs in our test sessions as well.
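
As an example of where a TI can lead, an item like T5 might eventually become an automated check. Here is a minimal sketch in Python; the property names and validation rules are invented for illustration, since the real contents of solr.properties are specific to our product:

    # Hypothetical sketch of automating TI "T5: valid and invalid entries
    # in solr.properties". Substitute your product's real keys and rules.

    SAMPLE = """\
    # example contents of a deployed solr.properties
    solr.host=localhost
    solr.port=8983
    solr.timeout.ms=abc
    """

    RULES = {
        "solr.host": lambda v: len(v) > 0,
        "solr.port": lambda v: v.isdigit() and 0 < int(v) < 65536,
        "solr.timeout.ms": lambda v: v.isdigit(),
    }

    def parse_properties(text):
        """Parse simple key=value lines, skipping blanks and comments."""
        props = {}
        for line in text.splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
        return props

    def problems_in(props):
        """Return a list of problems; an empty list means all checks passed."""
        problems = []
        for key, is_valid in RULES.items():
            if key not in props:
                problems.append(f"missing key: {key}")
            elif not is_valid(props[key]):
                problems.append(f"invalid value for {key}: {props[key]!r}")
        return problems

    for problem in problems_in(parse_properties(SAMPLE)):
        print("PROBLEM:", problem)  # flags solr.timeout.ms=abc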

This approach gives us a standardized, repeatable process that gets everyone involved in testing discussions from the onset. Identifying the test items early gets everyone involved in the whole testing approach, rather than focusing solely on the functionality or the acceptance testing.

A good tester is always thinking about all the things that may require testing. This allows us to share that information effectively.


The W5 Heuristic – Use it to define the ‘How’

I like to use mind maps from time to time to gather my thoughts. The beauty of them is their simplicity. The ability to capture thoughts and ideas and display them visually, instead of in long-winded paragraphs of words, is great. They can be useful in some contexts, but they can certainly be overdone and overused. Some people tend to ramble on with them, and the more content and detail a map contains, the more it can lose its effectiveness.

One method I use from time to time is the ‘W5 Heuristic’ to determine how I will approach work to be done. This can be used for test strategies, test design approaches, what and how to automate, defining problem statements and ideas for resolution, or really anything you would like to get done.

The premise is simple. You create a map with the 5 W questions:  Why? Who? What? Where? When?

The responses and discussions should help with the ‘How’, and also in establishing action items going forward.

The following illustration is an example of one:

[Image: W5 heuristic mind map snapshot]

In this context, I was getting our testers together to help design a new repository for our test artifacts. I used this method as a great way to answer some important questions. You then simply add nodes to each question with the information you discuss.

Why?

I always start with ‘Why’. If you cannot come up with good, solid reasons why you want to do something, or cannot see the value in it, then stop. This is important in testing, as wasteful activities and processes have no place in our world. Here we defined why we need a central repository.

Who?

Next is ‘Who’. We have defined the problem statement and why we need the repository; next, I wanted to find out who would need this information: which personas would benefit from it. As with any testing activity, we should always think about which stakeholders and personas may benefit from the work we do.

What?

Now that we have defined the ‘Why’ and the ‘Who’, we want to think about ‘What’ they may want to see. This can be applied as a model in any context, really. It puts us into the mindset of the various personas and helps us try to deliver maximum value.

Where?

Depending on the context of what you are mapping out, ‘Where’ can be simple and straightforward, as in this example. It was decided that a wiki was the best option, but it could certainly have morphed into many options, which would be great for discussion.

When?

This helps establish timelines, best guesses, priorities, and anything else that may be time-sensitive or need a deadline.

Defining all of these and having good discussions should help you understand ‘How’ you want to implement something.
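
If you prefer something text-based to a mind-mapping tool, the same structure can be captured in a few lines of code. Here is a minimal sketch in Python of the repository example above; the node contents are abbreviated, illustrative versions of our actual discussions:

    # A text-only rendering of a W5 map, using the test-artifact
    # repository example. Node contents are illustrative only.

    w5_map = {
        "Why?":   ["test artifacts are scattered and hard to find",
                   "no central place to share strategies and results"],
        "Who?":   ["testers", "developers", "product owners", "support"],
        "What?":  ["test strategies", "coverage notes", "known risks"],
        "Where?": ["team wiki (the option we settled on)"],
        "When?":  ["rough timelines and priorities per artifact type"],
    }

    # Print the map as an indented outline; the answers feed the 'How'.
    for question, nodes in w5_map.items():
        print(question)
        for node in nodes:
            print(f"  - {node}")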

So, if you are looking for a quick and simple method of trying to get answers, how to get things done, and set action items going forward, give the W5 Heuristic a shot.


Struggling to understand Alpha, Beta, MVP? You are not alone!

Since my transformation to agile, continuous delivery and the concept of attempting to deliver value to customers quicker than ever before, I have often struggled with the concept of an Alpha, Beta or MVP (Minimum Viable Product).  What makes it an alpha/beta? Viable to whom?  What makes it minimum?  How is it possibly a product in this early, unfinished state?

I recently read a very good article by Henrik Kniberg (@henrikkniberg) where he tries to explain the concept of MVP: http://blog.crisp.se/2016/01/25/henrikkniberg/making-sense-of-mvp. I personally love his analogies of ‘Earliest Testable Product’, ‘Earliest Usable Product’ and, best of all, ‘Earliest Lovable Product’. But although articles like this help with understanding the concept, they do not really help testers understand how to test these products.

One of the major stumbling blocks for me as a tester is determining what to test, what not to test, and what not to worry about for now. This often generates statements from the business like: “It’s good enough for now”, “We’ll cross that bridge when we come to it”, “Let’s just wait until we get feedback”, “Um yeah that’s a problem but…”.

When my company switched to agile teams, my team was one of the first to build a separate component feature that was going to get into the hands of the customer early and often for feedback. There were obviously many challenges with this. The toughest part for me (and my team) was understanding how to test this thing and what to care about from a quality perspective. The first time our PO said that we were going to ship the product to the customer at the end of the next sprint, I went into typical tester panic-mode! Drop everything else, try to test everything, report everything I find, document everything, etc. Encountering blasé feedback to the issues I was finding was disheartening and challenging to understand. How could we deliver a product with all these incomplete pieces and all these known issues (many not even raised)? But more importantly, how could I help the team make informed decisions about the level of quality and about what is important for the team to know prior to shipping?

Examples of the things I found: design wireframes that did not match what we had built, incomplete use cases, workflows that were not executable, usability issues, functionality bugs, performance and reliability issues, data validation and error handling not done, and the list goes on and on.

I tried my best to model my testing on the product in its current state of development, so I always knew what it was supposed to look like and behave like at any given point in time. I was doing this well. But how do I handle all the issues I was finding, and how do I wrap my head around the idea that the product is ‘good enough for now’? As a tester, I found this very difficult.

I still do not have all the answers, and I always welcome feedback and ideas! Below are some things I do that help:

  • Document known issues (not necessarily as raised bugs) and keep them around for discussion purposes with the team.
  • Continually provide feedback to the team as to your own perceived level of quality – find ways of doing this effectively.
  • Get the whole team involved in traditional testing activities (including the PO). You would be surprised how quickly issues can get fixed when others see how annoying they are too!
  • Get the UX/UI designers involved in the testing of design and usability issues. This can help in making the call that ‘this is good enough for now’. Pairing is an awesome way to do this.
  • Functional workflows – inform your team about the inability of users to perform specific actions in your product. It may not quite be as MVP as they originally thought.
  • Create a mechanism to ‘talk testing’ and not just about bugs. Often it is hard to talk about testing, the challenges you encounter, the risks, and the health of your product.
  • Disassociate yourself from the notion of trying to test everything! It cannot be done nor does it need to be done. Continually have discussions with your team about what are the most important things to test today, tomorrow and as time goes on.
  • Instead of simply reporting problems, inform the team about the impacts these problems have. These are much better discussions to have.

Most importantly, don’t sweat it so much! It doesn’t have to be perfect. It is early, expectations are lower, and there should be an understanding among the team, the customer and the business that this is an iterative approach to development and delivery. Your team will get there. The customer is getting involved earlier in the process, and, incomplete or not, this is a far better way of delivering value to your customers.