Instead of shifting testing to the left – move it there completely!

A lot of articles and discussions are happening regarding shifting testing to the left. From an agile perspective, this means moving more of the testing effort to the left on the board, into earlier swim lanes such as DISCOVERY, BACKLOG, NEXT UP, ANALYSIS, DEVELOPMENT, etc. The ability to get testing involved from the onset is a great thing.

While I wholeheartedly agree with this approach, I’m also of the mindset that this is where testing should reside full-time, and that in most cases, testers do not need to be involved on the ‘right side’ of the board.

I’ve always questioned the value of having testers act as “gatekeepers”, “double checkers” or “QA’ers”, executing the same tests that the teammates building the software can run themselves. This is quite often a wasteful exercise with minimal value. In the olden days of waterfall, it was necessary, as developers were often isolated from testing and weren’t even aware of how their issues were going to be tested. With agile comes the ability to have testers work with developers to design the test strategy: deciding what will be tested manually and how, what will be automated, documenting the tests, identifying edge and corner cases, and exploring and learning about the product. If testers are sharing and documenting all the testing needs on the ‘left’ of the board, is there really a need for them to test on the ‘right’? These types of constrained tests can be done by anyone on the team.

I would argue that having testers “QA” issues is not required. This is not to say that they should never do so. There can be times when an issue is very complex, tightly integrated and risky, where having a second set of eyes near the end is beneficial. But these cases should be identified as part of the test strategy on the ‘left’ and planned for accordingly.

Today’s modern tester should be focused more on identifying what and how to test, test coaching, test design and approach, documenting tests, identifying risks, domain/persona research, and regular exploratory/ad hoc testing.

As a tester, think of the value you should be providing your team(s). If you feel the effort of double-checking or QA’ing is offering little value, then change the process. Identifying what and how to test is often far more important than the testing itself. Focus more of your time and energy on testing activities that others will not be doing, or thinking of doing!

Work on sharing that knowledge with your team and make quality a team thing.



Agile Testing (Simplified)

It has been a little while since I have blogged about anything testing related. I moved into a BA role some time ago and have been a bit disconnected from the testing community at large. Testing is still a passion of mine, and I will continue to express my thoughts on testing-related topics.

What I am still passionate about is agile testing and how testers deliver value to agile teams. There are a lot of great articles and books out there on agile testing, and I encourage everyone to seek those out. This article is really for those new to testing, new to testing in agile, POs, Development Managers, BAs, or anyone seeking to understand some nuances of agile testing.

I am intentionally leaving out what many refer to as ‘test automation’ because that should spawn a whole separate discussion. The following topics are testing-focused only, and are techniques that illustrate how testing can deliver value in an agile context.

  • Shifting Testing Left
  • Early Story Adoption
  • Test Shaped Design
  • Dev Pairing and Test Mobbing
  • Just-In-Time Research
  • Sharing Test Ideas
  • Desk Checks
  • Coding To Support Development & Testing
  • Bug Introspection

Shifting Testing Left

A hugely popular term in testing circles, but what the heck does it really mean? Quite simply, it is getting testing involved as early in the design and development cycle as possible. With small, co-located, focused agile teams, there is no excuse not to have testers involved from the onset of projects. Testers should be there to question designs, share experiences, and use their product and domain knowledge to challenge things from the very beginning. If you feel your testers are being left out, try to change that. Your team will benefit from their presence in early discussions.

Early Story Adoption

What differentiates agile from a traditional waterfall approach is testers being involved from the very beginning on a particular story and working through it alongside developers. I am a big fan of not proceeding with any work on a story until testing has chimed in with their analysis of the story in general, the approach they will take to test it, and the oracles, heuristics and artifacts they care about.

Ideally, a tester should never be testing a story that they are not intimately aware of.

Test Shaped Design

As testers are often the feature experts on the team, they have a unique ability to shape the design of features in development. While the business folks and UX may have their own ideas of how to implement a new feature, testers have a unique voice that should be heard. Not only are they feature experts, but they often have expertise in the domain and, of course, should be the loud voice of your user. Couple this with their overall product knowledge of how various features interact with each other, and they can certainly change the course of a design. This is much easier to do in an agile context and is welcomed when it happens!

Dev Pairing and Test Mobbing

A huge advantage of being part of a small agile team is the ability to sit with others while work is going on, engaging each other, asking tons of questions, identifying ‘what if’ scenarios, sharing expertise in domain, features, db, code, personas, scenarios, the list goes on and on.

This is incredibly valuable in a dev/test pairing, but also as a mob of testers. I have seen a ton of great work done by testers getting in a room to break down epics and stories and using the shared experience of the group to plan out how to approach their testing.

Gone are the days of developers throwing an item over the wall to an oblivious tester to test. At least I hope so!

Just-In-Time Research

One strategy of agile is only preparing enough work for the team to work on in the near term. This allows for agility and the ability to shift focus and go off in a different direction quickly and as cheaply as possible. This is advantageous to testers, as they can focus their energy on a smaller subset of items instead of trying to digest a huge solution design document. Limiting the WIP and the number of fleshed-out stories narrows testers’ focus, letting them do their due diligence in researching the business problem and solution. They can also question things earlier to help catch potential problems before they happen.

Sharing Test Ideas

Share how you will be testing, share how you will be testing, share… Ok, I think you get it. Testers should share as much as they can with their teams about how they will be validating the story. This goes beyond your basic acceptance criteria and should also delve into functional and non-functional concerns, explicit vs. implicit requirements, scenarios, edge cases, boundary conditions, etc. The more the team knows ahead of time, the better off they’ll be, and it should help avoid the churn that finding bugs later can cause.

Our teams have a section on their story template where testers share the list of test ideas that will be tested as part of the story. Testers come up with this list as part of their analysis of the story. The whole idea is to share this before development begins, to highlight as many testing details as possible. Being agile makes this easier to do.

Desk Checks

The ability to test during development on a dev environment on a development branch is the boss! Doing this with the developer present is even better. The shared understanding of the testing approach and the ability to find issues before code is committed is so much more efficient and less costly. The turnaround time is instantaneous and bugs can be fixed as you encounter them.

The mandate of testers in agile should be to embrace bug prevention as much as possible as opposed to finding them later on.

Coding To Support Development & Testing

Coding skills in testing are inherently valuable. I have blogged in the past that I believe traditional test automation should be the domain of developers (or a specialized role). However, testers should have some coding ability to support their testing and their teams: SQL scripts to create test or demo data, code to set up test environments, queries to gauge performance impacts, or quick ways of navigating through workflows. Essentially, use code anywhere it can speed up your testing or help the developers you work hand-in-hand with. Agile is a team sport and this can only help your team become more successful.
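As a minimal sketch of the kind of tester-written code described above (the `customers` table and its columns are invented for illustration, not from any real schema), a few lines of Python can generate a reproducible SQL script for demo or test data:

```python
import random

# Hypothetical schema for illustration: a "customers" table with id/name/city.
NAMES = ["Ada", "Grace", "Alan", "Edsger", "Barbara"]
CITIES = ["Ottawa", "Toronto", "Halifax"]

def make_demo_customers(n, seed=42):
    """Generate reproducible SQL INSERT statements for demo/test data."""
    rng = random.Random(seed)  # fixed seed -> the same demo data every run
    return [
        f"INSERT INTO customers (id, name, city) "
        f"VALUES ({i}, '{rng.choice(NAMES)}', '{rng.choice(CITIES)}');"
        for i in range(1, n + 1)
    ]

if __name__ == "__main__":
    # Pipe this into your database CLI, or paste it into a seed script.
    print("\n".join(make_demo_customers(5)))
```

Seeding the random generator is the point: everyone on the team can regenerate the exact same data set, which makes demos and bug reports reproducible.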

Bug Introspection

You found a bug during testing after a developer has committed their code. YAY, right? Not so fast. While catching bugs is always a good thing (before a customer gets their mitts on them), we should always do some introspection on why the bug slipped through the cracks. Did I not share my test ideas early on? Did we not test for this during the desk check? Were we too busy to have testing involved earlier? These are the types of questions we need to ask ourselves and of the team. A huge advantage of working in agile is the ability to learn from mistakes and correct them right away.

Testers should treat these as learning opportunities to try and prevent this same thing from happening again.

This is by no means a comprehensive list of agile testing ideas. These are ones that I find particularly valuable and important to our teams.

If anyone has other items to share please reach out.


The paradigm shift in preventing bugs vs. catching them later

The notion of preventing bugs vs. catching them later is not a new idea. This methodology has been around since the dawn of time. But has it really been in practice through the years within companies that employ dedicated software testers?

“Shifting testing left” is a popular expression nowadays, but what does it really mean, and how does it impact teams and testers in the new agile world of software development, particularly in the context of trying to prevent bugs vs. trying to catch them later?

Alan Page had a great tweet a while ago about this paradigm shift in the perception of testing as bug finding:

I love this quote, and it really should be the thought pattern for teams now. What is needed from today’s testers and teams is this mindset shift, to help make this happen and to shed some misconceptions about testers and testing in general.

If I hearken back to the glory days of waterfall, one of the greatest values of a tester was their unique ability to catch bugs and prevent them from escaping into production. Developers would complete their work and the issue would be assigned to a tester to ‘assure the quality’ of the work that was done. Testers would delve deep into the issue, trying to find bugs at all costs. Testers would be lauded for their abilities, and in many cases promoted, for finding bugs before they escaped into production and ‘saving the company’ from embarrassment and from the perception by their customers of a lack of quality and attention to detail.

I remember trying to prove myself by finding and logging copious numbers of bugs. I remember performance reviews listing the number of bugs found, and getting raises and promotions to ‘senior’ or ‘lead’ based on this. I remember being a developer for a few years and being warned to avoid assigning your issues to certain testers, as they would catch more problems than some of the others and cause more headaches and rework.

Thank goodness times are a changin’!

One difficulty for management is justifying having dedicated testers. It was easier back then, because they could point to the reams of bugs found, print out reports and talk about them in meetings. It is much harder now, as teams are raising fewer bugs, and in some cases none at all, since they fix them right away and do not waste time with bug repositories and the wasteful churn that they create.

So, how do we value our testers?

I’ve blogged in the past about the purpose of our testing:

… and also the value we should be providing our teams:

As managers, talk to your teams and find the value your testers are providing. Reward them for preventing issues from occurring, for helping deliver features that your customers love, for having meaningful quality and testing discussions prior to work starting, for pairing with developers early on, for bringing quality processes to your team that improve how you are building software.

As testers, don’t measure yourselves by your ability to find bugs. We shouldn’t ever want to find any bugs. If you do, have discussions with your team about how to prevent the same thing from happening again, and about finding and fixing the gaps that allowed it to happen.

It can be rewarding to find bugs, no doubt. But share as much as you can about what your testing strategy is and how you are going to test. Don’t feel the need to keep an ‘ace’ in your back pocket so you can look clever later by finding bugs.

Don’t be that guy/gal!



Struggling to understand Alpha, Beta, MVP? You are not alone!

Since my transition to agile, continuous delivery and the concept of delivering value to customers quicker than ever before, I have often struggled with the concepts of an Alpha, a Beta and an MVP (Minimum Viable Product). What makes it an alpha or a beta? Viable to whom? What makes it minimum? How is it possibly a product in this early, unfinished state?

I recently read a very good article by Henrik Kniberg (@henrikkniberg) where he tries to explain the concept of MVP as well. I personally love his analogies of ‘Earliest Testable Product’, ‘Earliest Usable Product’ and, best of all, ‘Earliest Lovable Product’. Although articles like this help the understanding, they really do not help testers understand how to test such products.

One of the major stumbling blocks for me as a tester is determining what to test, what not to test, and what not to worry about for now. This often generates statements from the business like: “It’s good enough for now”, “We’ll cross that bridge when we come to it”, “Let’s just wait until we get feedback”, “Um, yeah, that’s a problem but…”.

When my company switched to agile teams, mine was one of the first to build a separate component feature that would get into the hands of the customer early and often for feedback. There were many challenges with this, obviously. The toughest part for me (and my team) was understanding how to test this thing and what to care about from a quality perspective. The first time our PO said that we were going to ship the product to customers at the end of the next sprint, I went into typical tester panic mode! Drop everything else, try to test everything, report everything I find, document everything, etc. Encountering blasé feedback to the issues I was finding was disheartening and challenging to understand. How can we deliver a product with all these incomplete pieces, with all these known issues (many not even raised)? But more importantly – how can I help the team make informed decisions about the level of quality and what is important for the team to know about prior to shipping?

Examples of the things I found: design wireframes did not match what we had built, use cases were incomplete, workflows were not executable, usability issues, functionality bugs, performance and reliability issues, data validation and error handling not done, and the list goes on and on.

I tried my best to model my testing on the product in its current state of development, so I always knew what it was supposed to look like and behave like at any given point in time. I was doing this well. But how do I handle all the issues I was finding, and how do I wrap my head around the idea that the product is ‘good enough for now’? As a tester, I found this very difficult.

I still do not have all the answers and always welcome feedback and ideas!  Below are some items that I do that help:

  • Document known issues (not necessarily as raised bugs) and keep them around for discussion purposes with the team.
  • Continually provide feedback to the team as to your own perceived level of quality – find ways of doing this effectively.
  • Get the whole team involved in traditional testing activities (including the PO). You would be surprised how quickly issues can get fixed when others see how annoying they are too!
  • Get the UX/UI designers involved in the testing of design and usability issues. This can help in making the call that ‘this is good enough for now’. Pairing is an awesome way to do this.
  • Functional workflows – inform your team about the inability of users to perform specific actions in your product. It may not quite be as MVP as they originally thought.
  • Create a mechanism to ‘talk testing’ and not just about bugs. Often it is hard to talk about testing, the challenges you encounter, the risks, and the health of your product.
  • Disassociate yourself from the notion of trying to test everything! It cannot be done nor does it need to be done. Continually have discussions with your team about what are the most important things to test today, tomorrow and as time goes on.
  • Instead of simply reporting problems, inform the team about the impacts these problems have. These are much better discussions to have.

Most importantly, don’t sweat it so much!  It doesn’t have to be perfect.  It is early, expectations are lower, and there should be an understanding amongst the team, the customer and the business that this is an iterative approach to development and delivery.  Your team will get there.  The customer is getting involved earlier in the process and incomplete or not, this is a way better way of delivering value to your customers.






Am I really a valuable member of my team?

I’ve been sitting around with a broken ankle and it has given me an opportunity to catch up on social media, twitter, blogs and articles (I wish it was a cool story like ‘hey guys, hold my beer, watch this…’ but sadly it was not).

There have been many tweets going around lately about the value, or more specifically, the lack of value, of having dedicated testers on teams. Not a new thread but a popular one nonetheless.

Testing is hard. Talking about testing is hard. Showing teams the value of testing is hard. We need to be better at sharing this knowledge and having stakeholders rely on us for vital aspects of the software development process.

If you feel, as a tester, that your value is not appreciated, that you are deemed insignificant or easily replaced, then shame on you. Perhaps take a look in the mirror and question the value you are adding to your team.

In Agile, ideally we want to have these homogeneous teams with heterogeneous capabilities. We want everyone to have diverse skill sets with the ability to jump in where needed and get the job done. I believe that there is still a need for specializations however. Testing is one of those specializations. On the surface it appears easy and replaceable, but when really analyzed, the value is tremendous.

Here are some ideas for the value you should be providing. This list is by no means complete. Think to yourself – am I doing all this stuff?  If not, does my team really value me and my contributions and am I doing all I can to make my team succeed?

Product and Feature Knowledge:

If your team does not come to you with questions about the products and features, the users or personas, the functionality, or the use cases and workflows – then you are failing. As a tester, you have a unique ability not to be bogged down by the technology, code, and various frameworks that developers have to deal with. Take the time to dig deep. You should be the expert; no one should know your products and features better than you!

Domain Knowledge:

Many think domain knowledge is solely the focus of the PO or Business Analyst. I think it should also be the tester’s. Who is better suited on your team to represent the users than you? If you do not understand your users’ day-to-day jobs, how they will interact with the system, the industry they work in, and how we are trying to help them, then you are failing. Learn about the industry, subscribe to magazines, bookmark industry-specific websites. Take some scheduled time every week or month to learn more. It will help you question designs better, write better acceptance criteria and tests, and design better scenarios; more importantly, it will show you how to help your user base do their jobs better and more efficiently!

Installations and Configurations:

Since you install, uninstall, and configure your applications hundreds or thousands of times, you are the team’s expert in this field. Since most applications today can be configured in many different ways, you should be invaluable to your teams. Learn it, understand it, dig deep, and experiment!

Test Strategy:

Everything you test should have a test strategy applied to it. Do not take a templated approach and apply it to everything you test. The context of what you are building is key in testing, and this is where a skilled tester can really shine. Otherwise, teams tend to follow the same approach to testing everything and not do the best they can.

Quality Focus:

Everyone wants to think of quality and have it be foremost in everyone’s mind. But no matter what anyone says, it is not. It should be for you, though! Your job is to keep everyone on track, thinking of quality first, and to help make your team successful in delivering high-quality features. Always be thinking of quality initiatives you can introduce to help your team.

Thinking Beyond Functionality:

This is where many think testing is easy. All we have to do is test the acceptance criteria, execute a simple test or scenario, and close off the story. There is way more to testing a story than that. Educate yourself and your teams on test modelling, design, exploratory testing, assessing risk, identifying and testing quality criteria, boundary and edge cases, etc. These should all be thought of before closing off your stories. You are the tester who should be quality focused – guide your teams and educate them as to what testing is really all about!

What To Test and When:

Testing is expensive, and 100 percent coverage is impossible. We cannot test everything. You need to use your experience and knowledge to test what matters most and use the time you have to do the best you can. When tasked with ‘We need to test feature X’, education and communication are important to doing this well. Be smart in how you test, what you test and when you test, and articulate your testing and decision making to your team. Testing is a thinking activity and should not be relegated to running a series of automated checks and executing some overly prescriptive test cases.

Exploratory Testing:

To be clear, this is not ad hoc testing, people! It does not mean ‘play around with it for a few hours and see if you find any problems’. I am not going to get into what exploratory testing is all about here. If you don’t really know, then you are failing as a tester. This is what testing is all about, not running through a test plan and test cases. Learn it, share how to do it with your teams, and think beyond the requirements.


Risk Analysis:

You should always be risk-aware. You are your team’s risk analyst. Always monitor what new functionality is being added, what bug fixes are being applied, and what change requests are being implemented. Your teammates will mostly care whether what they coded before still passes (automated checks). You are there to think deeper, react to change, and use your skills and training to uncover other potential problems.

Applying Oracles and Heuristics:

How do we know what to test, and how do we measure success or failure? This is where oracles and heuristics come into play. If you do not know what these are or how to use them as part of your test strategy, then learn about them and do it. It will change your testing forever.
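To make the idea concrete: one common heuristic is the ‘comparable product’ oracle, where you check your answer against something trusted that computes the same result a different way. A minimal Python sketch (the function under test is a toy invented for illustration):

```python
def dedupe_keep_order(items):
    """Toy function 'under test': drop duplicates, keep first occurrences."""
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

def oracle(items):
    """Trusted comparable product: dicts keep insertion order in Python 3.7+."""
    return list(dict.fromkeys(items))

def disagreements(cases):
    """Report every case where the implementation and the oracle differ."""
    return [c for c in cases if dedupe_keep_order(c) != oracle(c)]

# A handful of interesting inputs: empty, duplicates, strings, a long run.
cases = [[], [1, 1, 2], ["b", "a", "b", "c"], list("mississippi")]
failures = disagreements(cases)
```

If `failures` is non-empty, either the implementation or your understanding of it is wrong – and either outcome is useful information for the team.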

Test Artifacts:

Gone are the days of long test plans containing reams of prescribed test cases (or hopefully we are getting there). We need to be smarter about how we document testing for the future. Again, context is key here, big time. If something is overly complex and tightly integrated with specific data and results (like a report), we need to document it differently than we would a page where we enter customer details. Document accordingly, with just enough information. Use previous testing to help guide you. Smart test documentation is a vital component of what we do.

Product / Feature Health:

Continual monitoring of the health of your products and features is important. Come up with ways to show this to your teams and help them make quality-based decisions. You would be surprised how valuable this can really be!

Targeted Release and Regression Testing (not checking):

You have tons of automation (unit, UI, API) – congrats, job well done. These are very important and key to your team’s success. We cannot manually test everything all the time. But remember, this is not testing; it is just checking that everything still works as expected based on the knowledge you had at the time. Targeted release and regression testing should still be done. Products change, code changes, and impacts to functionality change over time. Be proactive and smart with your testing to target things that your automation may not cover or, more importantly, does not know about!

What To Automate:

You are the key here. We want to limit the amount of automation we have. We want the automation to be targeted and free of duplication. This requires a lot of analysis, planning, design and execution. Start small; isolate what is important to automate first. Take the lead: find out what your devs can do, find out what you can do, and decide as a team if you are happy with the combined coverage. Be smart about automation and you will be very happy you have it!

I could list another dozen items, but for the sake of brevity I am capping the list here. I am not saying that other team members cannot do any of these; they certainly can, but these will not be top priorities in their day-to-day activities.

They should be for you!





Just let me test the s**t out of it – my public rant!

With the advent of agile, lean, scrum/kanban and the like, I keep hearing things like “We need to automate everything”, “Anyone on the team can test”, “Unit tests are good enough”, or my personal favorite, “We don’t have time to manually test”. This reminds me of a phrase I once uttered in a meeting a long time ago, which I hope catches on:

“Just let me test the s**t out of it!”

So, what am I getting at here and why am I getting so hot and bothered?

With the proliferation of agile and its various derivations seems to come an unhealthy view of testing: anyone can do it, we need to automate everything, we are building so rapidly and iteratively that we can’t possibly manually regression test anymore, developer tests like unit tests are good enough, and so on. So, where are these ideas coming from? They are being touted online, in blogs and articles, and in meeting rooms and boardrooms.

The idea that we can ‘automate everything’ is just plain silly. Automation is a great asset if it is used wisely. It should be designed and implemented to ‘aid’ your testing by helping to alleviate pain points: the time it takes to execute reproducible scenarios, repetitive tasks, setting up and tearing down test environments, creating test data, and any other regression checks that you may want in place. The idea that automation should be the only testing required, or a replacement for actual testing, is ridiculous.
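As one small illustration of automation ‘aiding’ testing rather than replacing it (the `orders` schema and seed rows here are invented for the example), a short script can automate the repetitive chore of standing up a known-state environment for a human tester to explore:

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def scratch_environment():
    """Create a throwaway in-memory database seeded with known data,
    then tear it down afterwards, so every exploratory session starts clean."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
    conn.executemany(
        "INSERT INTO orders (id, status) VALUES (?, ?)",
        [(1, "open"), (2, "shipped"), (3, "open")],
    )
    try:
        yield conn
    finally:
        conn.close()  # teardown: nothing lingers between sessions

# Example session: query the known seed state before exploring further.
with scratch_environment() as db:
    open_orders = db.execute(
        "SELECT COUNT(*) FROM orders WHERE status = 'open'"
    ).fetchone()[0]
```

The automation here does the boring setup/teardown; the actual testing – poking at the product from that known state – is still a thinking activity done by a person.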

“Just let me test the s**t out of it!”

So, anyone can test, eh? Anyone can do almost anything in life; it is how well you do it, and how long it takes you, that matters. Broad-brush statements like that are harmful and just untrue. Hey look at me, I wrote an installation wiki page, so now I’m a Technical Writer. Hey look at me, I wrote a script to populate a table with records to use as test data, so now I’m a Developer. Hey look at me, I wrote a couple of user stories, so now I’m a Product Owner. Hey look at me, I wrote out some requirements, so now I’m a Business Analyst. Examples like these seem silly, right? Well, so is the idea that anyone is a Software Tester. Do not diminish people’s skill sets, experience, training, and dedication to the art and craft of testing.

“Just let me test the s**t out of it!”

Developer testing is good enough. I have heard this many times over my career, and with a quick Google search you’ll see it all over the place. Developer testing is very important, no question (even more so than QA testing). The more we work hand in hand with developers, the more quality is built into products. Saying that dedicated testing is not required may, in some circumstances, actually be true. But to say it all the time is, again, not healthy. Dedicated testers provide inherent value: applying appropriate testing techniques, strategies, tools and training, the ability to use oracles and heuristics, the ability to adapt quickly to the context of what is being built, an unbiased view of the product and code, adopting a user mentality, the ability to step away from the technology, a second set of eyes – I could go on and on. If you doubt the value of your testers or test team, engage one of your senior testing leaders: talk to them, learn, and most of all, listen!

“Just let me test the s**t out of it!”

We don’t have time to manually test or manually regression check anything. Say what? This is the most nonsensical thing I’ve heard in my many years of testing. We can devote endless hours of testers’ time to executing, analyzing, debugging, reworking, and writing automated checks (which in a lot of cases provide suspect value in the first place), but we have no time to manually test? As Tony Soprano would have said, “Get the f**k outta here!” Exploratory testing is a far better alternative if you only have limited time, and I highly encourage learning more about ET and its value. There is also the debate of testing vs. checking. As James Bach and Michael Bolton espouse all the time, there is a significant difference between checking and testing. For more information, a good resource is James Bach’s blog; an example post is:

Scripted testing (which can be automated or manual), automated checks and exploratory testing all have their place, but none should ever be relied on solely – if at all possible.

Give me some time, armed with my own test strategy and plan, some test oracles and heuristics, my knowledge of implicit and explicit requirements, test ideas and checklists, and I’ll show you the value of manual testing and its necessity!

“Just let me test the s**t out of it!”

Ok, enough of this rant. Hopefully you take something away from this blog post… otherwise my rant was ineffective and, darn it, I just lost valuable manual testing time.


Transitioning to agile – a journey worth taking

Like many testers in our industry, I was dragged kicking and screaming into Agile. I had heard horror stories of how it does not work: there are no specs to follow, development is done by the seat of your pants and everything happens all willy-nilly.

Our R&D organization went through a major re-organization. It was decided that we would restructure ourselves after the Spotify model: we would now have tribes, squads and chapters, and everyone would become a specialist in their particular domain and technology areas. Needless to say, this transition was scary, confusing, difficult, worrisome and, most of all, unknown.

We were now going to become Agile.

[Image: the Agile Manifesto]

Agile eh? Less focus on process and tools? But we have so much of that!  Less focus on documentation?  But we do so much of that and what we create is awesome! Less stringent focus on contracts and plans but but…

Our R&D organization was the epitome of a waterfall development environment. We were the Niagara Falls or the Victoria Falls of waterfall. We were the poster child.


We worked in gates: each stage (gate) of the SDLC was performed and signed off on before transitioning to the next, with a very heavy testing phase as one of the last gates in our process. How could we change? We were very successful, delivered high-quality products, were rapidly growing and were establishing ourselves as the world leader in our field. What were we thinking?

From a testing perspective, the thought of being absorbed into cross-functional teams, potentially losing managers, wearing many hats, changing the way we work and think, and wondering what our roles would be going forward created doubt and reluctance.

The change has been a positive one and was one of the best moves our company has made. It is not without its challenges, however. Read on as I expand on my views and thoughts on the transition. I will highlight the goods and the bads and give an overall summary of my thoughts and experiences.

The Goods!


Specialization

Instead of employees spreading themselves too thin and being jacks of all trades but masters of none, specialization helps you truly understand your products (your ability to design, develop, test, release and support them). Prior to our switch, we often found ourselves having to develop and test products using tools and technologies we had little expertise with. Specializing allows us to dig deeper, develop better and, more importantly, understand the products, the users and, of course, test better!

Freedom to change process

Along with the change came autonomy. We were now the masters of our own domain (Seinfeld, anyone?). We were no longer bogged down by company-wide imposed processes that may or may not fit what we are doing. If something doesn’t work or isn’t a fit, scrap it and do something else. These kinds of changes were impossible before. Context is key in testing. This allows strong, professional, thinking testers to adapt to the context of what they are testing and use the appropriate methods, tools, frameworks and the like to test better!

Sense of accomplishment


I have two dogs and I know the feeling of excitement they get whenever they get a bone or a treat. Work is no different. If you want happy, engaged, enthusiastic testers, you need to throw them a bone every once in a while. I’m not only talking about more money; I’m talking about a sense of accomplishment. Agile gets testers engaged! They are there from the beginning, contributing and making a difference. They help shape the design, the development and the testing strategies, analyze and manage risk, help keep everyone on task, and work on things they may not normally work on or contribute to. In traditional waterfall, testers are drones, working in isolation, segregated in many respects from decision-making and the shaping of products.

The testers in our organization are clearly happier now. They feel more valuable and most importantly, feel like they are integral pieces in the successes that we continue to have!

Varied work

Can anyone say less boredom? Agile gives you more freedom to explore new technologies and ways of doing things, lets you work with different people, and lets you take on challenges and see projects through from beginning to end and beyond. You work in a smaller, more iterative fashion. If you raise a bug, it gets fixed right away; it doesn’t go off into never-never land to maybe get looked at one day. One day you may be writing, the next day exploratory testing, the next doing automation, the next project planning, the next… who knows? I’ve always been a big proponent of ‘test what makes sense today’. Agile allows you to do this without getting bogged down by process or wasteful tasks.

The Bads!

Unfortunately, we can’t always view everything through rose-colored glasses. The following are issues I’ve found that we are working through:


Restrictive vision

Along with smaller, dedicated teams comes restrictive vision: “We are building our little piece of the puzzle and that is all that matters”. Actually, no, it is not! We need to continually interact with other groups, share information, and think as one company with the same vision. Moving from large groups to small groups invites this disconnect. It is up to everyone to keep those lines of communication open!

Not my problem, it’s yours

Along with becoming more siloed come behaviors like ‘Sorry, that’s not my area’ or ‘I don’t have time to help you, I’m too busy doing what I’m doing’. Prior to the re-organization, ALL testing was everyone’s responsibility. Everyone had their own work to do, but also worked from a shared pool of activities when needed. The ability to shift focus was much easier before. Now, everyone is busy in their own world, and it is human nature to occasionally shuffle off responsibilities or challenges to others, particularly if they are not in your area of responsibility.

This is certainly something we need to get better at: open communication, helping a friend out, keeping your eye on the whole company vision and, most of all, thinking of the quality of all products and features as a whole-team concern.

Wearing Many Hats

Sometimes we look good when we wear hats (women), sometimes we look silly (men). It depends on the context and just how silly the hat really looks. The idea of everyone in a small cross-functional team being able to do any job or task is idealistic. But is it reality? To a certain extent, sure. Anyone can do almost anything in life; it is how well you do it, and how long it takes you, that matters. I could climb a mountain, knit a sweater, fix a car, chug a beer, write a software program, write technical documents, etc. The question is how well and how quickly I can do them.

The answer to all of those in today’s speak would be ‘Meh’! Although, after all this time, you’d think I would be good at chugging a beer!?

We all have our specializations and strengths, gained through school, work and personal life experiences. Let us continue to utilize our unique skills and, where at all possible, let those better capable of doing the rest do the rest!


I feel it has been a successful move. We are delivering faster, with higher-quality products, than ever before. If we keep up with the goods and work a bit on the bads, the sky is the limit for our potential. Quality is not just owned by testing; it is a whole-team approach.

Our whole team is owning it!

Release Testing – to do or not to do, that is the question

Release testing and releasing your product are like peanut butter with jam, eggs with bacon, or beer with nuts: they kind of just go together. Even though you can have one without the other, we always tend to associate them. But do we have to?

Before blindly doing release testing and spending inordinate amounts of time on it, here are some questions you should ask yourself to determine whether a test is worth executing:

1. When was it executed last?
2. What has changed since then?
3. Is it manual or automated?
4. How long does it take to execute?
5. Has it ever failed before?
6. Do many of our users use this feature or not?
7. Do we have the expertise available for executing and analyzing the test?
8. What capacity do we have?
9. Is the test still valid?

Let’s take each of the questions and break them down:

1. When was it executed last – If this test was executed two weeks prior to your release candidate build being created, is there much value in re-executing it? Maybe, maybe not…
2. What has changed since then – In an area that has had little to no new development or bug fixes since the last time it was executed – do you really need to run it?
3. Is it manual or automated – If manual, should you invest a full day of a tester’s time to execute this test? If automated, do we have resources available to handle any failures, unexpected changes, or script updates that you do find?
4. How long does it take to execute – In general terms, is it worth the time spent to do it?
5. Has it ever failed before – Alas, the tests that get run over and over and over and always PASS. Stop doing these if they are manual; exploratory testing may be a far better alternative!
6. Do many of our users use this feature or not – If no one uses it, stop it. If very few use it, think about stopping it.
7. Do we have the expertise available for executing and analyzing the test – Testers are not quality gatekeepers. If you are not knowledgeable in the area, the technology or the product, you should not be doing release testing for this feature. Find someone more capable, tester or not.
8. What capacity do we have – Capacity planning should always be factored in prior to doing release testing activities. How many testers do I have, what do they know, how long do we have to test, what else is everyone working on? These should always drive your testing effort.
9. Is the test still valid – The curse of outdated testing documentation. If the test has not been modified in a long time, question its validity or appropriateness.
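The nine questions above amount to a triage rule for each candidate test. As a purely illustrative sketch (all field names, thresholds and cut-offs here are hypothetical, not from the original post), the checklist might be encoded like this:

```python
from dataclasses import dataclass

@dataclass
class ReleaseTest:
    """One candidate release test, described by the nine questions."""
    days_since_last_run: int    # Q1: when was it executed last?
    area_changed_since: bool    # Q2: has the area changed since then?
    is_automated: bool          # Q3: manual or automated?
    hours_to_execute: float     # Q4: how long does it take?
    has_ever_failed: bool       # Q5: has it ever failed before?
    user_adoption: float        # Q6: fraction of users on this feature (0..1)
    expertise_available: bool   # Q7: can we execute and analyze it?
    still_valid: bool           # Q9: is the test still valid?

def worth_executing(t: ReleaseTest, capacity_hours: float) -> bool:
    """Rough triage: skip tests the checklist suggests add little value,
    then run the rest only if capacity (Q8) allows."""
    if not t.still_valid or not t.expertise_available:
        return False  # Q7/Q9: don't run what we can't trust or analyze
    if not t.area_changed_since and t.days_since_last_run < 14:
        return False  # Q1/Q2: recently run and nothing has changed
    if t.user_adoption < 0.05:
        return False  # Q6: (almost) nobody uses this feature
    if not t.is_automated and not t.has_ever_failed:
        return False  # Q5: a manual check that has always passed
    return t.hours_to_execute <= capacity_hours  # Q4/Q8: fits the time we have
```

A real team would tune the thresholds to its own risk appetite; the point is simply that the checklist can be made explicit and argued about, rather than applied by gut feel at release time.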

Testing activities should never be performed just because they are ‘best practice’ or what you’ve always done. I’m a big proponent of testing what makes sense at the time, and what is or could be the biggest risk. Release testing does help give people the warm and fuzzy feeling that there is a certain level of quality in the product, which is nice. I’m not saying to do it or not to do it; just do what makes sense with the time and resources you have, factoring in the potential risks.

The dichotomy of building software and the actual users who use it

@PaulHolland_TWN tweeted a statement by @michaelbolton that said: “One of the things that gets in the way of effective testing is the distance between the testers and the users”

My reply to him was: “Anyone who builds/tests any artifact that users care about, should have more interaction with them”

This has been a sticking point for me for a long time.

Many developers and testers are doing a great job delivering fantastic software for their users. They’re building just what their users want… or are they?

Many developers and testers have little to no interactions with their user communities. How do we really know what we are building is what our customers really want or need?

Is it:
1. Because the developer who is building it said so?
2. Because your manager said so?
3. Because the product management team said so?
4. Because the requirements said so?
5. Because the use case, user story, business workflow said so?
6. Because someone talked to some user somewhere and wrote some stuff down on a piece of paper and then wrote more pieces of paper with details on it and…

There is a significant disconnect in the industry, where the majority of those developing software are kept separate from their users. The old guard still exists: send in the SWAT team to gather requirements, come up with use cases, business workflows and industry best practices, write up a whole bunch of documents, hand them off to the R&D team, and have them build it, validate it and ship it!

One significant problem… who is our user, how are we building what they really want and how can we validate it properly?

One of the challenges of testing is trying to portray the persona of a real user of your system. If you have never met one, talked to one, or seen how they interact with the system, how they modify it to suit them, how it changes the way they do their work (for good or for bad), and how it interacts with the other systems and programs they use, how can you accurately represent them in your testing, especially from a user experience perspective?

In some cases, we have someone who represents the user and conveys user intent and behavior. This reminds me of a game you play as a child, where you sit in a circle and whisper something to the child next to you, they whisper it to the next child, and so on; by the time it reaches the last person, it does not resemble the starting point at all. A lot can be lost in translation along the way.

Oftentimes, all we have to go off of is documentation, our own experience, what someone else tells us, our own understanding of the domain we are working in, and workflows that have been preset for our consumption.

Let’s try and change things. Let’s get involved with our users, talk to them, participate in on-site visits, and see our products in action! Only then can we really, honestly, say…

“Yes, we are delivering high quality products to our users that they really want and need!”

The Testing Mantra – OQI

I posted a tweet today about what I considered to be the Testing Mantra – Observe, Question, Inform (OQI). Everything we do falls into these 3 categories in one way or another.

The other day, a friend of a friend asked me what I did for a living. I said I was a software tester. He asked me, ‘So what does that really mean and what do you really do?’ As he was not in the IT field, I asked him what he thought it meant. He said, ‘Oh, I guess you try and find bugs and try and break things right?’

This is a common misconception many people have when they hear that you are a software tester. This viewpoint is one of the reasons other job titles have been spawned, like ‘QA Engineer’, ‘Quality Assurance Analyst’ and ‘Test Engineer’, to try to professionalize the job. Finding bugs and breaking software are by-products of testing, not the objective of testing.

I responded to him that what I really do is: ‘Observe products, systems, components, code and documentation, question what I find, and inform stakeholders and interested parties of my findings.’

Without getting into technical terms regarding tools, technology or process, I think that summarizes well what I do.

I think part of the challenge of professionalizing testing is not to come up with bogus certifications and training, but to educate people on what it really means to be a tester!

Spread the mantra – OQI!