Am I really a valuable member of my team?

I've been sitting around with a broken ankle, which has given me an opportunity to catch up on social media, Twitter, blogs and articles (I wish it was a cool story like 'hey guys, hold my beer, watch this…' but sadly it was not).

There have been many tweets going around lately about the value, or more specifically, the lack of value, of having dedicated testers on teams. Not a new thread but a popular one nonetheless.

Testing is hard. Talking about testing is hard. Showing teams the value of testing is hard. We need to be better at sharing this knowledge and having stakeholders rely on us for vital aspects of the software development process.

If you feel, as a tester, that your value is not appreciated, that you are deemed insignificant or easily replaceable, then shame on you. Perhaps take a look in the mirror and question yourself about the value you are adding to your team.

In Agile, we ideally want homogeneous teams with heterogeneous capabilities: everyone has a diverse skill set and the ability to jump in where needed and get the job done. However, I believe there is still a need for specializations, and testing is one of them. On the surface it appears easy and replaceable, but when really analyzed, the value is tremendous.

Here are some ideas for the value you should be providing. This list is by no means complete. Think to yourself – am I doing all this stuff?  If not, does my team really value me and my contributions and am I doing all I can to make my team succeed?

Product and Feature Knowledge:

If your team does not come to you with questions about the products and features, information about the users or personas, the functionality, or use cases and workflows, then you are failing. As a tester, you have a unique ability not to be bogged down by the technology, code, and various frameworks that developers have to deal with. Take the time to dig deep. You should be the expert; no one should know your products and features better than you!

Domain Knowledge:

Many think domain knowledge is solely the primary focus of the PO or Business Analyst. I think it should also be the tester's. Who on your team is better suited to represent the users than you? If you do not understand your users' day-to-day jobs, how they will interact with the system, the industry they work in and how we are trying to help them, then you are failing. Learn about the industry, subscribe to magazines, bookmark industry-specific websites. Take some scheduled time every week or month to learn more. It will help you question designs better, write better acceptance criteria and tests, and design better scenarios, but more importantly, it will show you how you can help your user base do their jobs better and more efficiently!

Installations and Configurations:

Since you install, uninstall, and configure your applications hundreds or thousands of times, you are the team's expert in this field. Since most applications today can be configured many different ways, you should be invaluable to your team. Learn it, understand it, dig deep, and experiment!

Test Strategy:

Everything you test should have a test strategy applied to it. Do not follow a templated approach and apply it to everything you test. Context of what you are building is key in testing, and this is where a skilled tester can really shine. Otherwise, teams tend to follow the same rote approach to testing everything and not do the best they can.

Quality Focus:

Everyone wants to think of quality and have it be foremost in everyone's mind, but no matter what anyone says, it is not. It should be for you, though! Your job is to keep everyone on track, thinking of quality first, and to find ways you can help your team succeed and deliver high-quality features. Always be thinking of quality initiatives you can introduce to help your team.

Thinking Beyond Functionality:

This is where many think testing is easy: all we have to do is test the acceptance criteria, execute a simple test or scenario and close off the story. There is way more to testing a story than that. Educate yourself and your teams on test modelling, design, exploratory testing, assessing risk, identifying and testing quality criteria, boundary and edge cases, etc. These should all be thought of before closing off your stories. You are the tester who should be quality focused; guide your teams and educate them as to what testing is really all about!

What To Test and When:

Testing is expensive, and 100 percent coverage is impossible; we cannot test everything. You need to use your experience and knowledge to test what matters most and utilize the time you have to do the best you can. When tasked with 'We need to test feature X', education and communication are important to do this well. Be smart in how you test, what you test and when you test, and articulate your testing and decision-making to your team. Testing is a thinking activity and should not be relegated to running a series of automated checks and executing some overly prescriptive test cases.

Exploratory Testing:

To be clear, this is not ad hoc testing, people! This does not mean 'play around with it for a few hours and see if you find any problems'. I am not going to get into what exploratory testing is all about; if you don't really know, then you are failing as a tester. This is what testing is all about, not running through a test plan and test cases. Learn it, share how to do it with your teams and think beyond the requirements.

Risk:

You should always be risk averse. You are your team's risk analyst. Always monitor what new functionality is being added, what bug fixes are being applied, and what change requests are being implemented. Your team will mostly only care whether what they have already coded still passes (automated checks). You are there to think deeper, react to change and use your skills and training to uncover other potential problems.

Applying Oracles and Heuristics:

How do we know what to test, and how do we measure success or failure? This is where oracles and heuristics come into play. If you do not know what these are or how to use them as part of your test strategy, then learn about them and put them to use. It will change your testing forever.
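As a minimal illustration of the idea (the function names here are hypothetical, and this shows only one kind of oracle, a comparable-implementation oracle), an automated check can compare the product's behaviour against a simpler reference implementation that we trust:

```python
import random

def fast_sort(items):
    # Stand-in for the product code under test (hypothetical).
    return sorted(items)

def test_fast_sort_against_reference_oracle():
    """Comparable-implementation oracle: a slow but trusted reference
    tells us what 'correct' looks like for randomly generated inputs."""
    for _ in range(100):
        data = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        expected = sorted(data)          # trusted reference implementation
        assert fast_sort(list(data)) == expected
```

The same thinking applies to manual testing: the oracle is simply whatever trusted source tells you that what you are seeing is (or is not) a problem.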

Test Artifacts:

Gone are the days of the long test plans containing reams of prescribed test cases (or hopefully we are getting there). We need to be smarter about how we document testing for the future. Again, context is key here, big time. If something is overly complex and tightly integrated with specific data and results (like a report), we need to document that differently than we would a page where we enter customer details. Document accordingly, with just enough information required. Use previous testing to help guide you. Smart test documentation is key and a vital component of what we do.

Product / Feature Health:

Continual monitoring of the health of your products and features is important. Come up with ways to show this to your teams and help them make quality-based decisions. You would be surprised how valuable this can really be!

Targeted Release and Regression Testing (not checking):

You have tons of automation (unit, UI, API)? Congrats, job well done. These are very important and key to your team's success. We cannot manually test everything all the time. But remember, this is not testing; it is just checking that everything still works as expected based on the knowledge you had at the time. Targeted release and regression testing should still be done. Products change, code changes, and impacts to functionality change over time. Be proactive and smart with your testing, and target the things that your automation may not cover or, more importantly, does not know about!

What To Automate:

You are the key here.  We want to limit the amount of automation we have.  We want the automation to be targeted and not have duplication.  This requires a lot of analysis, planning, design and execution.  Start small, isolate what is important to automate first. Take the lead, find out what your devs can do, find out what you can do, decide as a team if you are happy with the combined coverage.  Be smart about automation and you will be very happy you have it!

I could list off another dozen items, but for the sake of brevity, I am capping the number for the purposes of this blog. I am not saying that these cannot be done by other team members. They certainly can, but these activities will not be top priority in their day-to-day work.

They should be for you!


Just let me test the s**t out of it – my public rant!

With the advent of agile, lean, scrum/kanban and the like, I keep hearing things like: "We need to automate everything", or "Anyone on the team can test", "Unit tests are good enough", or my personal favorite, "We don't have time to manually test". This reminds me of a phrase I once uttered in a meeting a long time ago, which I hope catches on:

“Just let me test the s**t out of it!”

So, what am I getting at here and why am I getting so hot and bothered?

With the proliferation of agile and its various derivations seems to come this unhealthy view of testing: anyone can do it, we need to automate everything, we are building so rapidly and iteratively that we can't possibly manually regression test anymore, developer tests like unit tests are good enough, and the like. So, where are these ideas coming from? These pervasive ideas are being touted online, in blogs and articles, and in meeting rooms and boardrooms.

The idea that we can 'automate everything' is just plain silly. Automation is a great asset if it is used wisely. It should be designed and implemented to 'aid' your testing by helping to alleviate pain points: the time it takes to execute reproducible scenarios, repetitive tasks, setting up and tearing down test environments, creation of test data and any other regression checks that you may want in place. The idea that automation should be the only testing required, or a replacement for actual testing, is ridiculous.
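As a small, hypothetical sketch of that kind of aid (the names and data are made up, and pytest is just one possible tool), automation that handles repetitive setup and teardown frees the tester to spend their time actually testing:

```python
import pytest
import sqlite3

@pytest.fixture
def customer_db(tmp_path):
    """Set up a throwaway database with known test data, and tear it down
    afterwards, so every scripted or exploratory session starts clean."""
    db = sqlite3.connect(tmp_path / "customers.db")
    db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    db.executemany(
        "INSERT INTO customers (name) VALUES (?)",
        [("Ada",), ("Grace",), ("Linus",)],
    )
    db.commit()
    yield db
    db.close()  # teardown runs whether the test passes or fails

def test_customer_count(customer_db):
    # The automated check is a quick regression safety net, not the testing itself.
    (count,) = customer_db.execute("SELECT COUNT(*) FROM customers").fetchone()
    assert count == 3
```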

“Just let me test the s**t out of it!”

So, anyone can test, eh? Anyone can do almost anything in life; it is how well you do it, and how long it would take you, that matters. Broad-brush statements like that are harmful and just untrue. Hey look at me, I wrote an installation wiki page, so now I'm a Technical Writer. Hey look at me, I wrote a script to populate a table with records to use as test data, so now I'm a Developer. Hey look at me, I wrote a couple of user stories, so now I'm a Product Owner. Hey look at me, I wrote out some requirements, so now I'm a Business Analyst. Examples like these sound silly, right? Well, so is the idea that anyone is a Software Tester. Do not diminish people's skill sets, experience or training, and their dedication to the art and craft of testing.

“Just let me test the s**t out of it!”

Developer testing is good enough. I have heard this many times over my career, and a quick Google search will show it all over the place. Developer testing is very important, there is no question (arguably even more so than QA testing). The more we work hand in hand and alongside developers, the more quality is built into products. In some circumstances it may actually be the case that dedicated testing is not required, but to say this all the time is, again, not healthy. Dedicated testers provide inherent value: applying appropriate testing techniques, strategies, tools and training, the ability to use oracles/heuristics, the ability to adapt quickly to the context of what is being built, an unbiased view of the product/code, adopting a user mentality, the ability to step back from the technology, a second set of eyes; I could go on and on. If you are in doubt of the value of your testers or test team, engage one of your senior testing leaders, talk to them, learn and most of all, listen!

“Just let me test the s**t out of it!”

We don't have time to manually test or manually regression check anything. Say what? This is the most nonsensical thing I've heard in my many years of testing. We can devote endless hours of testers' time to executing, analyzing, debugging, reworking, and writing automated checks (which in many cases provide suspect value in the first place), but we have no time to manually test. As Tony Soprano would have said, "Get the f**k outta here!" Exploratory testing is a far better alternative if you only have limited time; I highly encourage learning more about ET and its value. There is also the debate of testing vs. checking. As James Bach and Michael Bolton espouse all the time, there is a significant difference between checking and testing. For more information, a good resource is James Bach's blog, for example this post: http://www.satisfice.com/blog/archives/856.

Scripted testing (which can be automated or manual), automated checks and exploratory testing all have their place, but none should be relied on solely, if at all possible.

Give me some time, armed with my own test strategy/plan, some test oracles/heuristics, my knowledge of implicit and explicit requirements, test ideas and checklists, and I'll show you the value of manual testing and its necessity!

“Just let me test the s**t out of it!”

Ok, enough of this rant. Hopefully after reading this blog post, you take something away from it… otherwise my rant was ineffective and darn it I just lost valuable manual testing time.

🙂

Transitioning to agile – a journey worth taking

Like many testers in our industry, I was dragged kicking and screaming into Agile. I had heard horror stories of how it does not work. There are no specs to follow, development is done by the seat of your pants and everything is done all willy-nilly.

Our R&D organization went through a major re-org. It was decided that we would restructure ourselves after the Spotify model. We would now have tribes, squads, and chapters, and everyone would become specialists in their particular domain and technology areas. Needless to say, this transition was scary, confusing, difficult, worrisome and most of all unknown.

We were now going to become Agile.

[Image: the Agile Manifesto]

Agile eh? Less focus on process and tools? But we have so much of that!  Less focus on documentation?  But we do so much of that and what we create is awesome! Less stringent focus on contracts and plans but but…

Our R&D organization was the epitome of a waterfall development environment. We were the Niagara Falls or the Victoria Falls. We were the poster child.


We worked in gates: each stage (gate) of the SDLC was performed and signed off on before transitioning to the next, with a very heavy testing phase as one of the last gates in the process. How could we change? We were very successful, delivered high-quality products, and were rapidly growing and establishing ourselves as the world leader in our field. What were we thinking?

From a testing perspective, the thought of becoming absorbed into cross-functional teams, potentially losing managers, wearing many hats, changing the way we work and think, and not knowing what our roles would be going forward, created doubt and reluctance.

The change has been a positive one and was one of the best moves our company has made. It is not without its challenges, however. Read on as I share my views and thoughts on the transition. I will highlight the goods and the bads and give an overall summary of my thoughts and experiences.

The Goods!

Specialization

Instead of employees spreading themselves too thin and being jacks of all trades but masters of none, specialization helps you truly understand your products and your ability to design, develop, test, release and support them. Prior to our switch, we often found ourselves having to develop and test products using tools and technologies that we had little expertise with. Specializing allows us to dig deeper, develop better and, more importantly, understand the products and the users and, of course, test better!

Freedom to change process

Along with the change came autonomy. We were now the masters of our own domain (Seinfeld, anyone?). We were no longer bogged down by company-wide imposed processes that may or may not fit what we are doing. If something doesn't work or is not a fit, scrap it and do something else. These kinds of changes were impossible before. Context is key in testing, and this autonomy allows strong, professional, thinking testers to adapt to the context of what they are testing and use the appropriate methods, tools, frameworks and the like to test better!

Sense of accomplishment


I have two dogs and I know the excitement they feel whenever they get a bone or a treat. Work is no different. If you want happy, engaged, enthusiastic testers, you need to throw them a bone every once in a while. I'm not only talking about more money, I'm talking about… a sense of accomplishment. Agile gets testers engaged! They are there from the beginning, contributing and making a difference. They help shape the design, the development and the testing strategies, analyze and manage risk, help keep everyone on task, and work on things they may not normally work on or contribute to. In traditional waterfall, you are drones, working in isolation, segregated in many respects from decision-making and the shaping of products.

The testers in our organization are clearly happier now. They feel more valuable and most importantly, feel like they are integral pieces in the successes that we continue to have!

Varied work

Can anyone say less boredom? Agile gives you more freedom to explore new technologies and ways of doing things, lets you work with different people, and lets you take on challenges and see projects through from beginning to end and beyond. You work in a smaller, more iterative fashion. If you raise a bug, it gets fixed right away; it doesn't go off into never-never land to maybe get looked at one day. One day you may be writing, the next day exploratory testing, the next day doing automation, the next day project planning, the next day… who knows? I've always been a big proponent of 'test what makes sense today'. Agile allows you to do this, and not get bogged down by process or wasteful tasks.

The Bads!

Unfortunately, we can't always view everything through rose-colored glasses.

Of course, with everything, there are challenges. The following are issues I've found that we are working through:

Silos

Along with smaller, dedicated teams comes a more restricted vision: "We are building our little piece of the puzzle and that is all that matters." Actually, no it is not! We need to continually interact with other groups, share information, and think as one company with the same vision. Moving from large groups to small groups can create this disconnect. It is up to everyone to keep those lines of communication open!

Not my problem, it's yours

Along with becoming more siloed come behaviors like 'Sorry, that's not my area' or 'I don't have time to help you, I'm too busy doing what I'm doing'. Prior to the re-organization, ALL testing was everyone's responsibility. Everyone had their own work to do, but also worked in a shared pool of activities when needed. The ability to shift focus was much easier before. Now, everyone is busy in their own worlds, and it is human nature to occasionally shuffle off responsibilities or challenges to others, particularly if it is not your area of responsibility.

This is certainly something we need to get better at: open communication, helping a friend out, keeping your eye on the whole company vision, and most of all, thinking of the quality of all products and features as a whole-team responsibility.

Wearing Many Hats

Sometimes we look good when we wear hats (women), sometimes we look silly (men). It depends on the context and just how silly the hat really looks. The idea of everyone in a small cross-functional team being able to do any job or task is idealistic. But is it reality? To a certain extent, sure. Anyone can do almost anything in life; it is how well you do it, and how long it would take you, that matters. I could climb a mountain, knit a sweater, fix a car, chug a beer, write a software program, write technical documents, etc. The question is how well and how quickly I can do them.

The answer to all of those in today’s speak would be ‘Meh’! Although, after all this time, you’d think I would be good at chugging a beer!?

We all have our specializations and strengths, gained through school, work, and personal life experiences. Let us continue to utilize our unique skills and, if at all possible, let those more capable of doing the rest do the rest!

Summary

I feel it has been a successful move. We are delivering faster, with higher-quality products, than ever before. If we keep up with the goods and work a bit on the bads, then the sky is the limit for our potential. Quality is not just owned by testing; it is a whole-team approach.

Our whole team is owning it!

Release Testing – to do or not to do, that is the question

Release testing and releasing your product are like peanut butter with jam, eggs with bacon, or beer with nuts. They kinda just go together. Even though you can have one by itself, we always tend to associate them with each other. But do they have to go together?

Here are some questions you should ask yourself, before blindly doing release testing and spending an inordinate amount of time on it, to determine whether a test is worth executing:

1. When was it executed last?
2. What has changed since then?
3. Is it manual or automated?
4. How long does it take to execute?
5. Has it ever failed before?
6. Do many of our users use this feature or not?
7. Do we have the expertise available for executing and analyzing the test?
8. What capacity do we have?
9. Is the test still valid?

Let’s take each of the questions and break them down:

1. When was it executed last – If this test was executed two weeks prior to your release candidate build being created, is there a lot of value in executing it again? Maybe, maybe not…
2. What has changed since then – In an area that has had little to no new development or bug fixes since the last time it was executed, do you really need to run it?
3. Is it manual or automated – If manual, should you invest a full day of a tester's time to execute this test? If automated, do we have people available to handle the failures, unexpected changes, or script updates that we do find?
4. How long does it take to execute – In general terms, is it worth the time spent to do it?
5. Has it ever failed before – Alas, the tests that get run over and over and over and over and always PASS. Stop doing these if they are manual. Exploratory testing may be a far better alternative!
6. Do many of our users use this feature or not – If no one uses it, stop running it. If very few use it, think about stopping.
7. Do we have the expertise available for performing and analyzing the test – Testers are not quality gatekeepers. If you are not knowledgeable in the area, the technology or the product, you should not be doing release testing for this feature. Find someone who is more capable, tester or not.
8. What capacity do we have – Capacity planning should always be factored in prior to doing release testing activities. How many testers do I have, what do they know, how long do we have to test, what else is everyone working on? These should always drive your testing effort.
9. Is the test still valid – The curse of the outdated test documentation. If the test has not been modified in a long time, question its validity or appropriateness.
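Taken together, these questions can even be rolled into a rough, purely illustrative triage score. The sketch below is hypothetical (the fields, weights and sample tests are invented, not a prescription); the point is that the decision can be made explicit rather than by habit:

```python
from dataclasses import dataclass

@dataclass
class RegressionTest:
    name: str
    days_since_last_run: int   # question 1
    area_changed_since: bool   # question 2
    automated: bool            # question 3
    hours_to_execute: float    # question 4
    has_ever_failed: bool      # question 5
    user_adoption: float       # question 6: rough fraction of users touching the feature

def triage_score(t: RegressionTest) -> float:
    """Higher score = more worth re-running before this release.
    The weights are arbitrary placeholders; adjust them to your own context."""
    score = 0.0
    score += 3.0 if t.area_changed_since else 0.0
    score += 2.0 * t.user_adoption
    score += 1.0 if t.has_ever_failed else 0.0
    score += min(t.days_since_last_run / 30, 2.0)            # staleness
    score -= 0.0 if t.automated else t.hours_to_execute / 4  # cost of manual execution
    return score

tests = [
    RegressionTest("export-to-pdf", 45, True, False, 4.0, True, 0.6),
    RegressionTest("legacy-import", 10, False, True, 0.1, False, 0.05),
]
for t in sorted(tests, key=triage_score, reverse=True):
    print(f"{t.name}: {triage_score(t):.2f}")
```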

Testing activities should never be performed just because they are 'best practice' or what you've always done. I'm a big proponent of testing what makes sense at the time and what is, or could be, the biggest risk. Release testing does help give people the warm and fuzzy feeling that there is a certain level of quality in your product, which is nice. I'm not saying to do it or not to do it; just do what makes sense with the time and resources you have, factoring in the potential risks.

The dichotomy of building software and the actual users who use it

@PaulHolland_TWN tweeted a statement by @michaelbolton that said: “One of the things that gets in the way of effective testing is the distance between the testers and the users”

My reply to him was: “Anyone who builds/tests any artifact that users care about, should have more interaction with them”

This has been a sticking point for me for a long time.

Many developers and testers are doing a great job delivering fantastic software for their users. They’re building just what their users want… or are they?

Many developers and testers have little to no interactions with their user communities. How do we really know what we are building is what our customers really want or need?

Is it:
1. Because the developer who is building it said so?
2. Because your manager said so?
3. Because the product management team said so?
4. Because the requirements said so?
5. Because the use case, user story, business workflow said so?
6. Because someone talked to some user somewhere and wrote some stuff down on a piece of paper and then wrote more pieces of paper with details on it and…

There is a significant disconnect in the industry, where the majority of those developing software are kept separate from their users. The old guard still exists: they send in the SWAT team to gather requirements, come up with use cases, business workflows and industry best practices, write up a whole bunch of documents, hand them off to the R&D team and have them build it, validate it and ship it!

One significant problem… who is our user, how are we building what they really want and how can we validate it properly?

One of the challenges of testing is trying to portray the persona of a real user of your system. If you have never met one, talked to one, or seen how they interact with the system, how they modify it to suit them, how it changes the way they do their work (for good or for bad) and how it interacts with the other systems and programs they use, how can you accurately represent them in your testing, especially from a user experience perspective?

In some cases, we have someone who represents the user and conveys user intent and behavior. This reminds me of a game you play as a child, where you sit in a circle and whisper something to the child next to you, they whisper it to the next child, and so on; by the time it reaches the last person, it does not resemble the starting point at all. A lot can be lost in translation along the way.

Oftentimes, all we have to go off of is documentation, our own experience, what someone else tells us, our own understanding of the domain we are working in, and workflows that have been preset for our consumption.

Let's try and change things. Let's get involved with our users, talk to them, participate in on-site visits, and see our products in action! Only then can we really, honestly, say…

“Yes, we are delivering high quality products to our users that they really want and need!”

The Testing Mantra – OQI

I posted a tweet today about what I considered to be the Testing Mantra – Observe, Question, Inform (OQI). Everything we do falls into these 3 categories in one way or another.

The other day, a friend of a friend asked me what I did for a living. I said I was a software tester. He asked me, ‘So what does that really mean and what do you really do?’ As he was not in the IT field, I asked him what he thought it meant. He said, ‘Oh, I guess you try and find bugs and try and break things right?’

This is a common misconception people make when they hear that you are a software tester. This viewpoint is one of the common reasons other job titles have been spawned, like 'QA Engineer', 'Quality Assurance Analyst', and 'Test Engineer', to try and professionalize the job. Finding bugs and breaking software are by-products of testing, not the objective of testing.

I responded to him that what I really do is: ‘Observe products, systems, components, code and documentation, question what I find, and inform stakeholders and interested parties of my findings.’

Without getting into technical terms regarding tools, technology or process, I think that summarizes well what I do.

I think part of the challenge of professionalizing testing is not to come up with bogus certifications and training, but to educate people on what it really means to be a tester!

Spread the mantra – OQI!

How to gain credibility as a tester

Sometimes testers have an identity crisis. Occasionally we are an afterthought, we are not consulted, not invited to meetings and kept segregated from decision making. Instead of complaining about it… prove your worth!

Here are some hints to gaining credibility:

1. Become the product expert! There is no excuse for not knowing the product inside and out. No one should know your product better than you. Product Managers may know how the product is supposed to work, but you should know how it ‘really works’!

2. Become the expert in integrations! There are many people that know how a certain product behaves. But, there are not many who know how well all products integrate with one another. Integration testing is a very valuable technique and one that should be leveraged.

3. Find what is missing on your team or project! We all have our standard work that needs to be done: writing tests, documentation, validating requirements, automating, etc. But the one thing you need to utilize is your brain! Find what is missing. Every project has gaps. Maybe the requirements are lacking detail, the design is incomplete, the product is buggy, estimates are off, staffing is not adequate, or test cases, plans or charters reveal inconsistencies. Find the gaps!

4. Develop relationships! Shed the ‘introvertedness’. Get to know your teammates. People will respect you, and will consult you, when they see you care, are involved and show that you are an integral part of the team. Hang out together outside of work or at lunch.

5. Learn! Kind of a general comment but it was meant to be. Study your product, learn more about the domain you are working in, learn new technologies, read blogs and articles, take courses, read books, play (ok that was for @vds4). Self improvement is invaluable and is in your hands!

Most of these topics may be obvious, but they can be handy as a refresher. Anyone in any position can become irrelevant or out of touch.

Make sure you are not one!

Testing a new product – what should be the main focus?

Often, as testers, we can be too focused on the technology, the business requirements, the design, the data, the use cases, the user workflows/stories and the like.

When a new product is about to be built, we should ask ourselves: "What 'claims' and 'value adds' are we making about this new product?"

Does this not seem like the number one thing we should be concerned about testing? Should this not drive your testing efforts? I believe it should.

Whenever a new product or service goes to market, there are always the big marketing splashes:

  • “Faster than any other product on the market”
  • “Increases productivity of users by 50%”
  • “Allows collaboration of various entities for ease of operation”
  • “Integrates seamlessly with various other tools”

Yadda, Yadda, Yadda

Marketing messages and sales claims should be the major drivers for testing. These messages can also lead to other 'value adds' that aren't explicitly stated or even implied. How can these claims be made without testing them explicitly?

Before testing commences, there should be consultations with senior-level management, product managers, marketing and sales about how we are going to validate these statements and claims and prove the value-add statements! Also, what do they really mean? What are the specific metrics?
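As a minimal, hypothetical sketch (the function, timings and threshold are all invented for illustration), a claim like "faster than any other product on the market" only becomes testable once it is pinned to a concrete metric, and then it can be checked explicitly:

```python
import time
import statistics

def search_customers(query: str) -> list:
    # Stand-in for the feature behind the marketing claim (hypothetical).
    time.sleep(0.05)
    return ["Ada Lovelace"] if "ada" in query.lower() else []

def test_search_meets_stated_performance_claim():
    """'Faster than the competition' pinned down to a concrete metric:
    median search latency under 200 ms on this test environment."""
    samples = []
    for _ in range(20):
        start = time.perf_counter()
        search_customers("ada")
        samples.append(time.perf_counter() - start)
    assert statistics.median(samples) < 0.200
```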

Think of it as a pyramid or a flowchart of some sort. Each ‘value add’ statement should be at the top. As you work your way down, you can get more involved and detailed about specifics like business requirements, technology/tools, use cases, etc. But they should all lead back to each of the value adds defined for the feature.

These should drive your test efforts and be your main focus. Should we be focused on finding bugs? Yes. Should you validate business requirements? Yes. Should you validate the design and write up various tests and test cases? Yes, yes, yes. But all of those are secondary if the product is not as fast as we said it would be, does not increase productivity like we said it would, does not integrate seamlessly, or does not help users collaborate with each other more easily.

Thinking of these will help keep you focused on what matters, and not simply on how many test cases are passing or failing.

Automation – is it really worth it?

There has been many a blog written about both the positives and negatives of automation. My own personal take is that the positives do outweigh the negatives, but it all depends on how automation is used. It is a unique tool in your arsenal, but be wary of over-valuing it! Automation should be done in conjunction with manual testing, NOT as a replacement for it!

I’ve worked in situations where the whole focus was on automating. Development is done, Test Planning is done, now ‘Script, Damn You, Script’! This extreme focus takes you away from other valuable testing that you could be doing.

Some positives to automation are:

  1. Complex, time-consuming or mundane data setup or test execution can be automated to save time when you need to run it over and over again. Big plus!
  2. With a very stable feature, it provides a good ‘checking’ mechanism to ensure certain behavior still works as expected
  3. It is a great change-management tool for picking up change that occurs in your system/product
  4. For a very large product/project, it gives you some coverage in an area that, due to time and personnel constraints, you may not get to again. You might call that 'Better Than Nothing Testing (BTNT?)'

Some negatives to automation are:

  1. Provides a false sense of ‘quality’ of your product, when too much emphasis is put into the scripted tests
  2. Difficult to maintain, especially with a volatile product that is in flux
  3. Time spent investigating possible failures quite often results in ‘change’ not ‘defects’ (cost vs. benefit)
  4. Tests may indicate 'PASS' but may no longer be valid or applicable (see #1)

If the amount of time spent automating, maintaining, re-working, scrapping and reviewing scripts far outweighs the benefits, it makes sense to constantly review your process and make adjustments. Automation should provide you with useful information and have value. If, consistently over time, it is not providing worthwhile information (and/or defects), it is time for a change. It should also not be an all-consuming process!

Change can be a very daunting thing.

How do you balance manual and automated testing? Very good question, and one you should constantly be striving to answer. Continually review what you are trying to accomplish with the automation and adjust accordingly.

Being exposed to context-driven testing, exploratory testing, agile, and the testing community as a whole has certainly opened my eyes to the future of testing. It is an exciting thing to be involved in, and to help invoke change in the way things are done.

Finding bugs that matter

There is nothing more frustrating than spending an inordinate amount of time researching a potential bug, scouring the repository to see if it already exists, running through various scenarios to try to reproduce it, talking to SMEs, documenting the issue, setting up meetings and phone calls to discuss it, etc., only to find out… the issue does not matter!

Some examples may be:

a) Being rejected by the stakeholder as not important, and thus will not get fixed
b) Falling into the cesspool, that is the bug repository, where it will sit, sight unseen for eons
c) Having its severity lowered to where it 'may' be fixed at some point
d) Being cast as ‘something that a user would never do’ and dismissed

Often, what we think may be a show-stopper or a higher-priority issue is not viewed that way by others. As testers, we have a unique outlook on things and can bring a different perspective. Certainly at the UX level, we can provide valuable input.

Here are a few suggestions that may help:

1. Sell yourself and the issue – I have found many instances of issues I raised that were decided never to be fixed, only to have customers raise them as high-priority issues at a later date. Don't be afraid to stand up, be counted, and explain why the issue matters.
2. Consult a second opinion – what may not be a big deal to one person may be a big deal to another. There are always multiple irons in the fire. Create a discussion. It is better to find and fix it now; that is certainly much less costly than doing it later.
3. Review the issue at a later date – what may have been dismissed earlier may be taken more seriously as the product evolves.
4. Accept it for what it is – sometimes we need to do this. We can't fix every bug, and it may very well be that it is not important enough to fix, or will be back-burnered until the end of time.

Don’t be offended, sell yourself, sell the issue, consult second opinion(s), review later, accept.

It can be hard to do all these things, but in the end, each has its place.