Test Cases Are Evil! Or are they?

Ah test cases, the artifact everyone loves to hate. Why the hatred? Why no love? I am well aware of the testing community’s dislike for them. But let me expound on some possible virtues of test cases.

Now before you (and the rest of the twitterverse) explode in animosity and rage over this, please sit down, grab a nice cold beverage, open your minds and free your thoughts. Be open-minded, non-judgmental and realize that everyone’s context is different and we do not all work in a utopian work environment. If you do, you can just stop reading and go back to playing fetch with your unicorn and experimenting with the varied usages of fairy dust.

Many of the downsides of test cases are well documented in blogs and articles.  They tend to be things like:

  1. The executor is just ‘checking’ and not ‘testing’
  2. Following a script may make you miss other important problems
  3. The executor tends to shut off their brain while executing
  4. You are only validating a small slice of functionality
  5. Anyone can execute them – they are not a replacement for skilled testers doing real testing
  6. If the series of steps in a test case is that important, then those steps should be automated
  7. A tester’s time is much more valuable doing other things than executing mundane, scripted test cases
  8. Test cases are frequently out of date and need to be constantly updated and revised

I could go on and on. While I do not disagree with most of these sentiments, the same qualities that make test cases a liability can also make them valuable.


How dare you!

The following are my thoughts on the benefits of test cases and how they can support your testing. As with everything in testing, they are based on context.

Ok, everyone take a deep breath, let’s continue…


Time

The bane of every tester. “I wish I had more time to test” is one phrase I’m sure we all utter from time to time. It is undoubtedly one of our biggest constraints (besides ‘resources’, which I will touch on next). For many of us who are constantly having to multi-task, context-switch, and be responsible for large numbers of products and features, test cases can be a useful resource to aid in our testing. For little-known or rarely used features, they can be helpful in doing simple checks (or smoke tests if you prefer). If release time is approaching and you have a large number of features to test, they can be useful when time is a concern. Using proper risk heuristics to help determine what needs testing vs. checking, they can help you execute some simple scenarios and, better yet, get others involved if need be.


Resources

Most organizations do not have enough testers, and we sometimes need to bring in others to help. If you work somewhere that does have enough testers, and you are only responsible for a small portfolio of products and features, then good for you. For those that do not, resource management can be a problem. Where test cases can help is in easily getting others involved in the testing. This is important in regular regression checking and release testing. If time and resources are a problem, the ability to get developers, product owners, technical writers, UX people or anyone else involved in testing is sometimes necessary. Test cases are written in a way that is transferable and easy to digest. The problem with checklists, mind maps and the like is that they are easily understood by the author but not by anyone else. Also, when time and resources are a concern, spending time hand-holding others and explaining artifacts and how to test something is not feasible either.


Reminders

When you own lots of features, there are times when you constantly ask yourself “What is the workflow again?” or “How does this work again?” or “How do I set up and configure this again?” or “What data do I need again?” I think you get the point. Test cases can be a great reminder for testers as to how to configure, set up, and execute basic workflows. When time is of the essence, they can make a huge difference. A strong, professional tester will still veer off the path while executing a test case, or use it as a base to guide their testing.

Domain Knowledge

For some who work in a “heavy” domain (think banking, aerospace, defense, medical, etc.), the domain concepts can be a challenge. Who the personas are, how they work, and how they will use and interact with your software can sometimes be difficult to wrap your head around. Having test cases that detail this information can be very important in giving you that context. Rather than just using a checklist of functionality to check, tying that into a persona and how they use that functionality is important.

Product and Feature Knowledge

There are times when we need to test things that we do not know very well. When you are given a testing assignment with a quick turnaround time, this is difficult. Having never worked with a feature, seen it in action, or tested or experimented with it before is a tough challenge for a tester. The existence of test cases can enable quick ramp-up on basic workflows, personas, data, and other items that may be important to check. Used wisely as a resource by a skilled tester, they can be greatly appreciated and useful.

Developing Shared Understanding

There are many standards used in organizations to help share information on how to do things. From a testing perspective, there is really only one that is universally understood by everyone – testers, developers, technical writers, product owners, product management, executives, outsource partners, support, services, and customers. This is the good old test case. Now, whether or not it is the best way to test something is debatable; what is not debatable is the universal language it represents. This is where it ‘may’ have merit. That is for you and your organization to figure out.

Technical Debt

Most of us have a lot of technical debt. There are older legacy features that customers use that were either built in the early days with little to no automation, that your team built but did not automate, or that you inherited with no automation at all. Finding the time to automate legacy products and features is tough. Your organization should help you prioritize this effort. In the meantime, the execution of legacy test cases may be necessary.

Better Than Nothing

I have often referred to the execution of a test case as ‘better than nothing’ testing – BTNT. While I do agree with a lot of the aforementioned downfalls of using test cases, in many of the examples I have described above, it certainly is better than nothing.

These are but a few examples of why I think test cases can have merit. So, before you dismiss someone’s use of them, think about their context, ask them why they use them, and have a discussion about the possible downfalls and benefits.

I’ve often espoused the benefits of exploratory testing over executing test cases. But I do think test cases ‘can’ have their place in some contexts.

And yes, if you can, automate the damn things as well!
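If a test case’s steps really are that important, they can usually be turned into an automated check. Here is a minimal sketch in Python, with a hypothetical in-memory part catalog standing in for the real product – the names and steps are illustrative, not from any real system:

```python
# Hypothetical example: the scripted test case "1. add a part,
# 2. search for it, 3. verify it is returned" becomes a check that
# can run on every build instead of consuming a tester's time.

class PartCatalog:
    """Tiny stand-in for the feature under test."""

    def __init__(self):
        self._parts = {}

    def add_part(self, part_id, name):
        self._parts[part_id] = name

    def search(self, term):
        """Return the ids of parts whose name contains the term."""
        return [pid for pid, name in self._parts.items() if term in name]


def test_add_part_flow():
    # Step 1: set up the precondition the test case describes
    catalog = PartCatalog()
    catalog.add_part("P-100", "brake pad")
    # Step 2: perform the scripted action
    results = catalog.search("brake")
    # Step 3: check the expected result
    assert results == ["P-100"]
```

Run under pytest (or call `test_add_part_flow()` directly), a check like this frees the tester to explore rather than re-execute the same script by hand.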




Raise problems NOT bugs!

Recently, I tweeted the following:

When communicating something found during testing, refer to it as a problem, not a bug. You would be surprised how much more engaged others get.

The reason I wanted to blog on this topic is that I feel strongly about the value we provide as testers, one aspect of which is the ability to investigate and raise potential problems with your team(s). I like to use the word ‘problem’ and not ‘bug’. I will expand on why…

REASON 1: Bug – assumes we are quality gatekeepers.

Testers should not be the team member who unilaterally decides what is and what is not a bug. We should be relaying observations or potential problems and allowing the team to investigate and make determinations on whether something should be classified as a bug. Teams have multiple disciplines with various skill sets and points of view (development, business, UX, etc.) – utilize them to help make this classification.

REASON 2: Bug – may not indicate you are seeking assistance.

Identifying something as a bug is not necessarily a cry for help or a request for assistance from your team. A problem can signify an alert to the team to get involved (which also leads into #3).

REASON 3: Bug – may assume the necessary due diligence has already been done.

When you relay to your team that you have found a bug, it can be assumed that it is indeed a bug, that you have done your due diligence, and that it does not need further investigation. This may lead to erroneous bug reports, unnecessary churn, or just a complete waste of time. Getting the whole team involved helps prevent these things from happening and, more importantly, helps prevent them from happening again.

REASON 4: Bug – denotes something that needs fixing.

Again, this classification assumes that it is a problem that needs fixing.  But does it?  How do you know for sure?

REASON 5: Bug – assumes the customer would have a problem with what you have found.

Calling something a bug assumes that a real-life user of your product would also find a problem with what you have found. But would they? How do you know? What you think may be a problem may not be one for them. Find out this information if you can.

REASON 6: Bug – can be a downer to developers.

Calling something a bug can be a downer for a team, and especially for a developer. An assumption can be made that something was missed or implemented incorrectly by the developer. However, the problem may have originated from bad design, ambiguous or missing requirements, out-of-date specs, poor UX, or improper/misguided testing. Raising what you find as a ‘problem’ instead gets the whole team involved in the investigation, resolution and root cause. It does not finger-point or improperly identify sources of the problem. Believe it or not, many people on your team love to investigate problems as much as you do!

REASON 7: Bug – can inspire a blasé attitude.

Finding potential bugs is, for sure, an important aspect of what we do. But identifying something as a bug can lead to blasé attitudes among team members. “Ok, you found another bug, send it back to the developer who worked on it.” This is where you want your team members to become more engaged. I have found that when I identify something as a problem, the whole team wants to know what it is and get involved. Conversely, when I have said I found a bug, team members not associated with that issue do not have the same level of engagement and curiosity. Keep them curious and engaged!

This is not to say you should never raise bugs. Of course, some are obvious. But try getting into the habit of referring to everything you find as a problem, potential problem or observation, and make note of the engagement levels of your team. Hopefully they go up. If they do not seem to, do not retreat back to your throne atop the quality gate.

Keep at it until it works.

What is the main purpose of our testing?

In a previous post, I discussed items that can help a tester provide value to his or her team: https://testingfromthehip.wordpress.com/2016/01/08/am-i-really-a-valuable-member-of-my-team/.

The ideas I brought forward were more for someone in a tester ‘role’ and not necessarily for the specific act of testing itself.

In this post, I want to address what the real goal of testing should be, and provide some ideas and suggestions about how our actual hands-on testing can help. What prompted this post was some recent interviews I conducted. Many candidates struggled to give examples of the value their testing provides. As testers, we should always be striving to provide maximum value from our testing. If we don’t know what this is ourselves, how can our organizations truly appreciate and value the work that we do? Simply saying things like ‘catch all bugs’, ‘execute all tests’, or ‘release or regression test’ does not indicate what the point of our testing is, nor what value any of those activities provide.

In my humble opinion, the main purpose of testing can be summed up as…

“Help your team make an informed opinion about the perceived quality of the products and features that they own.”

That’s it. That’s all.

I like to call it: “Perceived Quality Assessment”.  I will be briefly touching on ideas and techniques that can allow us to help our teams do this better. Before I continue further, I want to state that the notion of the ‘perception of quality’ is not new, nor is it my idea. There are many thought leaders who have written about this before, like Gerald Weinberg for example. This post is not to take credit for the concept of perceived quality, but my own take on it, and how we can help.

Before I get to my list, let’s first talk about perceived quality. A team can never really know the quality of their products or how they will be perceived in the marketplace. They can know things like the number of open defects, missed requirements, poorly written or thought-out designs, or early feedback from demos. Our testing should be helping to provide our teams with as much information as we can, so they can form an informed opinion on their own perceived view of quality. This perception should help teams decide things like when to release or not release, when to enhance, when to bug-fix, when to scale back, when to move on, etc.

The following is a brief list of items that can help us provide this information to our teams. I am only including 5 items for the sake of brevity. There are many more, but I like these 5 as starters 🙂

  1. Unknowns
  2. Risks
  3. Test Coverage – specifically what is not being tested
  4. Non-functional testing like ‘quality attributes’
  5. Testing how customers and various personas use your features

Let’s talk unknowns. Testing should always be trying to uncover unknowns. This does not necessarily mean bugs. This can be anything we never thought of, may have overlooked, did not know about in the first place, or that may have dire consequences. This is all just information that your team needs to know about. Look hard, explore and find it. Your team needs you for this!

Let’s talk risk. Risk can manifest itself in many ways – the aforementioned unknowns, for example. It can also be lurking in new bug fixes or enhancements, other teams’ work that may impact yours, problems integrating with other products or features, or really anything that has changed. Testing should always be monitoring these areas, investigating and exploring. Automation can do this to a certain extent (by checking known conditions), but a good, skilled tester can uncover risky areas that automation does not know exist.

Let’s talk test coverage. One of my favorite topics, and one that not many really understand well. Knowing what your testing (and automation) is covering, and more specifically what it is not covering, is incredibly important information that your team needs. It’s great to have lots of testing in place, but your team needs to understand what that testing is all about and what is not being tested. This rolls up to the previous two topics I mentioned: unknowns and risk. Without understanding this, can a team really have an educated perceived quality assessment? I think not.

Let’s talk non-functional testing. Not that I am a fan of this terminology necessarily, but since it is widely used I am using it here. A tester should always be testing to uncover potential problems with the quality attributes that matter. These are also sometimes referred to as the ‘ilities’ of testing. They are critical in many customers’ minds when forming their perception of a software’s quality. Customers typically do not care that the requirements were met, that the acceptance criteria passed, or that test cases were all executed successfully. The perception of the quality of software can be seriously impacted by things like security, performance, accessibility, usability, and the like. Test for these. Your teams need to know this information.

Let’s talk about how customers use software. Since software can be highly configurable, we need testing to flush out potential problems that may occur when it is used in differing or, in some cases, unintended ways. Our testing should factor these in. Seek out information on how your customers use your products and the various personas involved. Test accordingly. Your teams need this information too.

Take a moment and think of the value you are providing with your testing. Are you really helping your team make informed decisions? Are you testing the right things at the right time? Are you a lighthouse, shining light so your team’s boat can avoid obstacles and problems on its way to its destination? Let’s hope you are!

Thanks for reading. I hope something resonated with you. If anyone agrees or disagrees with anything I wrote, feedback is always welcomed.

Test well my friends!

Use ‘Test Items’ to help improve upfront collaboration and quality

In the spirit of collaboration and getting testers involved early in the development of new feature stories, we have developed a quality initiative that helps us do this.  We are implementing ‘Test Items’ to be added to our story templates.

For some context… we have small agile teams (8-9 members) and we manage our work using a Kanban board. When a new story is added to the backlog, the whole team gets together to review, estimate, question and provide feedback before the story gets moved to our ‘Next Up’ column on our board.

A developer then grabs the story and starts doing their analysis and deciding how to implement it.

A tester starts working on the issue as well. The tester does their own analysis of the issue, examines and modifies the acceptance criteria, and has early discussions with the developer if they see something amiss. What we are now starting to do is define test items. These are specific things that the tester is ‘thinking’ about testing, wants to propose, or wants to have a discussion about. They are added to a section of our story template under ‘Testing Considerations’. The test items should be simple bullet points and contain just enough information for everyone to understand the context.

Here is an example:

Testing Considerations:

T1:  Acceptance Criteria
T2:  Automation
T3:  Configuration parameter removal
T4:  New solr.properties file gets deployed when installing MX
T5:  Valid and invalid entries in solr.properties file
T6:  Starting, stopping, restarting Solr with no errors
T7:  Add part requirement flow still works as expected and parts are returned successfully
T8:  Quality Attributes: performance, security, reliability, usability, etc
T9:  Multiple Browsers
T10: Multi-User access
T11: Data integrity and validation
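To make a TI like T5 concrete, here is one way it might translate into an automated check. This is a sketch only – the file contents, key names, and validation rules below are hypothetical, not the real solr.properties format:

```python
# Hypothetical check for "T5: Valid and invalid entries in
# solr.properties": parse simple java-style key=value lines and
# verify that the required keys are present.

REQUIRED_KEYS = {"solr.port", "solr.data.dir"}  # made-up key names


def parse_properties(text):
    """Parse key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "=" not in line:
            raise ValueError(f"invalid entry: {line!r}")
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props


def validate(props):
    """Return the set of required keys that are missing."""
    return REQUIRED_KEYS - props.keys()


sample = """
# deployed when installing MX
solr.port=8983
solr.data.dir=/var/solr/data
"""
assert validate(parse_properties(sample)) == set()
```

A check like this can sit alongside the TI so the ‘valid and invalid entries’ conversation in the 3 Amigos session has something concrete to point at.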

When the issue is ready to be worked on, our team does a ‘3 Amigos’ session involving the PO, the developer and the tester, to have another discussion prior to work commencing. The developer talks through their implementation plan and ideas, and the tester talks through the various TIs they have identified. The first two TIs are typically the same, where we talk about the acceptance criteria, how we will test it manually, and what our automation plans are. The beauty of using TIs is that we also talk about all the other items we intend to test. This gets us all on board and agreeing to the items. The developer knows ahead of time what we will be testing (so there are no surprises), and we can have intelligent conversations about quality approaches before we even write a single line of code. With the annotations, we can easily reference them in our test sessions as well.

This approach gives us a standardized, repeatable process that gets everyone involved in testing discussions from the onset. Identifying the test items early gets everyone involved in the whole testing approach, not solely focused on the functionality or the acceptance testing.

A good tester is always thinking about all the things that may require testing. This allows us to share that information effectively.


The W5 Heuristic – Use it to define the ‘How’

I like to use mind maps from time to time to gather my thoughts. The beauty of them is their simplicity: the ability to capture thoughts and ideas and display them visually instead of in long-winded paragraphs. They can be useful in some contexts, but they can certainly be overdone and overused. Some people tend to ramble on with them. The more content and detail a map contains, the more it can lose its effectiveness.

One method I use from time to time is the ‘W5 Heuristic’ to determine how I will approach work to be done. This can be applied to test strategies, test design approaches, what and how to automate, defining problem statements and ideas for resolution, or really anything you would like to get done.

The premise is simple. You create a map with the 5 W questions:  Why? Who? What? Where? When?

The responses and discussions should help with the ‘How’, and also in establishing action items going forward.

The following illustration is an example of one:

[Image: W5 heuristic snapshot]

In this context, I was getting our testers together to help design a new testing repository for our test artifacts. I used this method as a great way to answer some important questions. You then simply add nodes to each question with the information you discuss.


I always start with ‘Why’. If you cannot come up with good, solid reasons why you want to do something, or see the value in it, then stop. This is important in testing, as wasteful activities and processes have no place in our world. Here we defined why we need a central repository.


Next is ‘Who’. We have defined the problem statement and why we need the repository. Next, I wanted to find out who would need this information – which personas would benefit from it. As with any testing activity, we should always think of the stakeholders and personas that may benefit from the work that we do.


Now that we have defined the ‘Why’ and the ‘Who’, we want to think about ‘What’ they may want to see. This can be applied as a model in any context, really. It helps put us into the mindset of the various personas and try to deliver maximum value.


‘Where’ can, depending on the context of what you are mapping out, be simple and straightforward, as in this example. It was decided that a wiki was the best option, but it could certainly have morphed into many options, which would be great for discussion.


Finally, ‘When’. This helps establish some timelines, best guesses, priorities and anything else that may be time-sensitive or where you may want to apply a deadline.

Defining all of these and having good discussions should help you understand the ‘How’ of what you want to implement.
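For those who prefer text to pictures, the same W5 map can be captured in a few lines of code. The repository answers below are made up for illustration:

```python
# A W5 session captured as a simple structure: one node per question,
# with the discussion points hanging off it (all answers hypothetical).
w5_map = {
    "Why": ["test artifacts are scattered", "no single source of truth"],
    "Who": ["testers", "developers", "product owners", "support"],
    "What": ["test strategies", "session notes", "coverage summaries"],
    "Where": ["team wiki"],
    "When": ["first draft next sprint", "review monthly"],
}


def summarize(mapping):
    """Render the map as an indented outline, mirroring a mind map."""
    lines = []
    for question, answers in mapping.items():
        lines.append(f"{question}?")
        lines.extend(f"  - {a}" for a in answers)
    return "\n".join(lines)


print(summarize(w5_map))
```

Walking the questions in order, then reading the outline back to the group, is one lightweight way to drive out the ‘How’ and the action items.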

So, if you are looking for a quick and simple method of getting answers, figuring out how to get things done, and setting action items going forward, give the W5 Heuristic a shot.




Struggling to understand Alpha, Beta, MVP? You are not alone!

Since my transition to agile, continuous delivery, and the concept of attempting to deliver value to customers quicker than ever before, I have often struggled with the concepts of an Alpha, a Beta or an MVP (Minimum Viable Product). What makes it an alpha/beta? Viable to whom? What makes it minimum? How is it possibly a product in this early, unfinished state?

I recently read a very good article by Henrik Kniberg (@henrikkniberg) where he tries to explain the concept of MVP as well: http://blog.crisp.se/2016/01/25/henrikkniberg/making-sense-of-mvp. I personally love his analogies of ‘Earliest Testable Product’, ‘Earliest Usable Product’ and, best of all, ‘Earliest Lovable Product’. Although articles like this help with understanding, they really do not help testers understand how to test these products.

One of the major stumbling blocks for me as a tester is determining what to test, what not to test, and what to not worry about for now.  This often generates statements from the business like:  “It’s good enough for now”, “We’ll cross that bridge when we come to it”, “Let’s just wait until we get feedback”, “Um yeah that’s a problem but…”.

When my company switched to agile teams, my team was one of the first to build a separate component feature that was going to get into the hands of the customer early and often for feedback. There were many challenges with this, obviously. The toughest part for me (and my team) was understanding how to test this thing and what to care about from a quality perspective. The first time our PO said that we were going to ship the product to the customer at the end of the next sprint, I went into typical tester panic mode! Drop everything else, try to test everything, report everything I find, document everything, etc. Encountering blasé feedback to the issues I was finding was disheartening and hard to understand. How could we deliver a product with all these incomplete pieces and all these known issues (many not even raised)? But more importantly – how could I help the team make informed decisions about the level of quality and what was important to know prior to shipping?

Examples of the things I found: design wireframes that did not match what we had, incomplete use cases and workflows that were not executable, usability issues, functionality bugs, performance and reliability issues, data validation and error handling not done – and the list goes on and on.

I tried my best to model my testing to represent the product in its current state of development, so I always knew what it was supposed to look and behave like at any given point in time. I was doing this well. But how do I handle all the issues I was finding, and how do I wrap my head around the idea that the product is ‘good enough for now’? As a tester, I found this very difficult.

I still do not have all the answers and always welcome feedback and ideas! Below are some things I do that help:

  • Document known issues (not necessarily as raised bugs) and keep them around for discussion purposes with the team.
  • Continually provide feedback to the team as to your own perceived level of quality – find ways of doing this effectively.
  • Get the whole team involved in traditional testing activities (including the PO). You would be surprised how quickly issues can get fixed when others see how annoying they are too!
  • Get the UX/UI designers involved in the testing of design and usability issues. This can help in making the call that ‘this is good enough for now’. Pairing is an awesome way to do this.
  • Functional workflows – inform your team about the inability of users to perform specific actions in your product. It may not quite be as MVP as they originally thought.
  • Create a mechanism to ‘talk testing’ and not just about bugs. Often it is hard to talk about testing, the challenges you encounter, the risks, and the health of your product.
  • Disassociate yourself from the notion of trying to test everything! It cannot be done nor does it need to be done. Continually have discussions with your team about what are the most important things to test today, tomorrow and as time goes on.
  • Instead of simply reporting problems, inform the team about the impacts these problems have. These are much better discussions to have.

Most importantly, don’t sweat it so much! It doesn’t have to be perfect. It is early, expectations are lower, and there should be an understanding among the team, the customer and the business that this is an iterative approach to development and delivery. Your team will get there. The customer is getting involved earlier in the process, and incomplete or not, this is a far better way of delivering value to your customers.






Am I really a valuable member of my team?

I’ve been sitting around with a broken ankle and it has given me an opportunity to catch up on social media, twitter, blogs and articles (I wish it was a cool story like ‘hey guys, hold my beer, watch this…’ but sadly it was not).

There have been many tweets going around lately about the value, or more specifically, the lack of value, of having dedicated testers on teams. Not a new thread but a popular one nonetheless.

Testing is hard. Talking about testing is hard. Showing teams the value of testing is hard. We need to be better at sharing this knowledge and having stakeholders rely on us for vital aspects of the software development process.

If you feel, as a tester, that your value is not appreciated, that you are deemed insignificant or easily replaced, then shame on you. Perhaps take a look in the mirror and ask yourself what value you are adding to your team.

In agile, ideally we want homogeneous teams with heterogeneous capabilities. We want everyone to have diverse skill sets with the ability to jump in where needed and get the job done. I believe there is still a need for specializations, however, and testing is one of them. On the surface it appears easy and replaceable, but when really analyzed, the value is tremendous.

Here are some ideas for the value you should be providing. This list is by no means complete. Ask yourself: am I doing all this? If not, does my team really value me and my contributions, and am I doing all I can to make my team succeed?

Product and Feature Knowledge:

If your team does not come to you with questions about the products and features – information about the users or personas, the functionality, the use cases or workflows – then you are failing. As a tester, you have a unique ability to not be bogged down by the technology, code, and various frameworks that developers have to deal with. Take the time to dig deep. You should be the expert; no one should know your products and features better than you!

Domain Knowledge:

Many think domain knowledge is solely the focus of the PO or Business Analyst. I think it should also be a focus for the tester. Who on your team is better suited to represent the users than you? If you do not understand your users’ day-to-day jobs, how they will interact with the system, the industry they work in, and how we are trying to help them, then you are failing. Learn about the industry, subscribe to magazines, bookmark industry-specific websites. Take some scheduled time every week or month to learn more. It will help you question designs better, write better acceptance criteria and tests, and design better scenarios – but more importantly, it will help you help your user base do their jobs better and more efficiently!

Installations and Configurations:

Since you install, uninstall, and configure your applications hundreds or thousands of times, you are the team’s expert in this field. Since most applications today can be configured in many different ways, you should be invaluable to your team. Learn it, understand it, dig deep, and experiment!

Test Strategy:

Everything you test should have a test strategy applied to it. Do not follow a templated approach and apply it to everything you test. Context of what you are building is key in testing, and this is where a skilled tester can really shine. Otherwise, teams tend to follow the same iterative approach to testing everything and not do the best they can.

Quality Focus:

Everyone wants to think of quality and have it be foremost in everyone’s mind. But no matter what anyone says, it is not. It should be for you, though! Your job is to keep everyone on track, thinking of quality first, and to consider how you can help in this area and make your team successful in delivering high-quality features. Always be thinking of quality initiatives you can introduce to help your team.

Thinking Beyond Functionality:

This is where many think testing is easy: all we have to do is test the acceptance criteria, execute a simple test or scenario, and close off the story. There is way more to testing a story than that. Educate yourself and your teams on test modelling, design, exploratory testing, assessing risk, identifying and testing quality criteria, boundary and edge cases, etc. These should all be thought about before closing off your stories. You are the tester who should be quality-focused – guide your teams and educate them as to what testing is really all about!

What To Test and When:

Testing is expensive, and 100 percent coverage is impossible. We cannot test everything. You need to use your experience and knowledge to test what matters most and use the time you have to do the best you can. When tasked with 'We need to test feature X', education and communication are important to do this well. Be smart about how you test, what you test, and when you test, and articulate your testing and decision making to your team. Testing is a thinking activity and should not be relegated to running a series of automated checks and executing some overly prescriptive test cases.

Exploratory Testing:

To be clear, this is not ad hoc testing, people! It does not mean 'play around with it for a few hours and see if you find any problems'. I am not going to get into what exploratory testing is all about; if you don't really know, then you are failing as a tester. This is what testing is all about, not running through a test plan and test cases. Learn it, share how to do it with your teams, and think beyond the requirements.


Risk Analysis:

Always be alert to risk; you are your team's risk analyst. Always monitor what new functionality is being added, what bug fixes are being applied, and what change requests are being implemented. Your teams will only care whether what they coded before still passes (automated checks). You are there to think deeper, react to change, and use your skills and training to uncover other potential problems.

Applying Oracles and Heuristics:

How do we know what to test, and how do we measure success or failure? This is where oracles and heuristics come into play. If you do not know what these are, or how to use them as part of your test strategy, then learn about them and put them to use. It will change your testing forever.
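As one concrete example, a common heuristic oracle is a "comparable product": check your implementation's results against a trusted alternative. Here is a minimal Python sketch, where a hand-rolled insertion sort stands in as an invented system under test and the built-in `sorted()` plays the oracle:

```python
import random

# Hypothetical system under test: a hand-rolled insertion sort.
def insertion_sort(items):
    result = []
    for item in items:
        i = len(result)
        # Walk back past every element greater than the new item.
        while i > 0 and result[i - 1] > item:
            i -= 1
        result.insert(i, item)
    return result

# Oracle: the built-in sorted() acts as the trusted "comparable product".
# Feed both the same random inputs and flag any disagreement.
random.seed(7)
for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert insertion_sort(data) == sorted(data)
```

The oracle does not tell you the code is correct, only that it is consistent with something you already trust; choosing which oracle to trust for a given feature is part of your test strategy.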

Test Artifacts:

Gone are the days of long test plans containing reams of prescribed test cases (or hopefully we are getting there). We need to be smarter about how we document testing for the future. Again, context is key here, big time. If something is overly complex and tightly integrated with specific data and results (like a report), we need to document it differently than we would a page where we enter customer details. Document accordingly, with just enough information. Use previous testing to help guide you. Smart test documentation is a vital component of what we do.

Product / Feature Health:

Continual monitoring of the health of your products and features is important. Come up with ways to show this to your teams and help them make quality-based decisions. You would be surprised how valuable this can really be!

Targeted Release and Regression Testing (not checking):

You have tons of automation (unit, UI, API)? Congrats, job well done. These are very important and key to your team's success. We cannot manually test everything all the time. But remember, this is not testing; it is just checking that everything still works as expected based on the knowledge you had at the time. Targeted release and regression testing should still be done. Products change, code changes, and impacts to functionality change over time. Be proactive and smart with your testing, targeting things your automation may not cover or, more importantly, does not know about!
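One way to keep release testing targeted is to tag your regression checks by product area and select only those touched by a release. A minimal sketch of the idea, with all test names, tags, and areas invented for illustration:

```python
# Hypothetical regression suite: each check is tagged with the product
# areas it exercises. All names here are invented for this sketch.
REGRESSION_SUITE = {
    "login_lockout": {"auth"},
    "invoice_rounding": {"billing", "reports"},
    "pdf_export": {"reports"},
    "password_reset": {"auth", "email"},
}

def select_tests(changed_areas):
    """Return the checks whose tags overlap the areas changed this release."""
    return sorted(name for name, tags in REGRESSION_SUITE.items()
                  if tags & set(changed_areas))

print(select_tests(["reports"]))  # → ['invoice_rounding', 'pdf_export']
```

The map itself is where the tester's judgment lives: keeping it honest as the product changes is exactly the "knows about impacts over time" work that automation alone cannot do.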

What To Automate:

You are the key here. We want to limit the amount of automation we have, and we want it to be targeted, with no duplication. This requires a lot of analysis, planning, design, and execution. Start small and isolate what is important to automate first. Take the lead: find out what your devs can do, find out what you can do, and decide as a team whether you are happy with the combined coverage. Be smart about automation and you will be very happy you have it!

I could list off another dozen items, but for brevity I will cap the list here. I am not saying that these cannot be done by other team members. They certainly can, but these things will not be top priority in their day-to-day activities.

They should be for you!