Struggling to understand Alpha, Beta, MVP? You are not alone!

Since my transition to agile, continuous delivery and the idea of delivering value to customers quicker than ever before, I have often struggled with the concepts of an Alpha, a Beta and an MVP (Minimum Viable Product).  What makes something an alpha or a beta? Viable to whom?  What makes it minimum?  How is it possibly a product in this early, unfinished state?

I recently read a very good article by Henrik Kniberg (@henrikkniberg) where he tries to explain the concept of MVP as well: http://blog.crisp.se/2016/01/25/henrikkniberg/making-sense-of-mvp.  I personally love his analogies of ‘Earliest Testable Product’, ‘Earliest Usable Product’ and, best of all, ‘Earliest Lovable Product’.  Although articles like this help with understanding the concept, they do not really help testers understand how to test these products.

One of the major stumbling blocks for me as a tester is determining what to test, what not to test, and what not to worry about for now.  This often generates statements from the business like:  “It’s good enough for now”, “We’ll cross that bridge when we come to it”, “Let’s just wait until we get feedback”, “Um, yeah, that’s a problem but…”.

When my company switched to agile teams, my team was one of the first to build a separate component feature that was going to get into the hands of customers early and often for feedback.  There were obviously many challenges with this. The toughest part for me (and my team) was understanding how to test this thing and what to care about from a quality perspective. The first time our PO said we were going to ship the product to customers at the end of the next sprint, I went into typical tester panic mode!  Drop everything else, try to test everything, report everything I find, document everything, and so on.  Encountering blasé feedback to the issues I was finding was disheartening and hard to understand.  How could we deliver a product with all these incomplete pieces and all these known issues (many not even raised)? More importantly, how could I help the team make informed decisions about the level of quality and what they needed to know about prior to shipping?

Examples of the things I found: design wireframes did not match what we had built, use cases were incomplete, workflows could not be executed end to end, usability issues, functional bugs, performance and reliability problems, missing data validation and error handling, and the list goes on and on.

I tried my best to model my testing to represent the product in its current state of development, so I always knew what it was supposed to look like and how it was supposed to behave at any given point in time.  I was doing this well. But how was I to handle all the issues I was finding, and how could I wrap my head around the idea that the product was ‘good enough for now’?  As a tester, I found this very difficult.

I still do not have all the answers and always welcome feedback and ideas!  Below are some things I do that help:

  • Document known issues (not necessarily as raised bugs) and keep them around for discussion purposes with the team.
  • Continually provide feedback to the team on your own perceived level of quality, and find ways of doing this effectively.
  • Get the whole team involved in traditional testing activities (including the PO). You would be surprised how quickly issues can get fixed when others see how annoying they are too!
  • Get the UX/UI designers involved in the testing of design and usability issues. This can help in making the call that ‘this is good enough for now’. Pairing is an awesome way to do this.
  • Functional workflows: inform your team when users are unable to perform specific actions in your product. It may not be quite as ‘viable’ as they originally thought.
  • Create a mechanism to ‘talk testing’ and not just about bugs. Often it is hard to talk about testing, the challenges you encounter, the risks, and the health of your product.
  • Disassociate yourself from the notion of trying to test everything! It cannot be done, nor does it need to be done. Continually have discussions with your team about the most important things to test today, tomorrow and as time goes on.
  • Instead of simply reporting problems, inform the team about the impacts these problems have. These are much better discussions to have.

Most importantly, don’t sweat it so much!  It doesn’t have to be perfect.  It is early, expectations are lower, and there should be an understanding amongst the team, the customer and the business that this is an iterative approach to development and delivery.  Your team will get there.  The customer is getting involved earlier in the process and, incomplete or not, this is a far better way of delivering value to your customers.

Am I really a valuable member of my team?

I’ve been sitting around with a broken ankle and it has given me an opportunity to catch up on social media, Twitter, blogs and articles (I wish it was a cool story like ‘hey guys, hold my beer, watch this…’ but sadly it was not).

There have been many tweets going around lately about the value, or more specifically, the lack of value, of having dedicated testers on teams. Not a new thread but a popular one nonetheless.

Testing is hard. Talking about testing is hard. Showing teams the value of testing is hard. We need to be better at sharing this knowledge and having stakeholders rely on us for vital aspects of the software development process.

If you feel, as a tester, that your value is not appreciated, that you are deemed insignificant or easily replaced, then shame on you.  Perhaps take a look in the mirror and ask yourself about the value you are adding to your team.

In Agile, ideally we want homogeneous teams with heterogeneous capabilities: everyone has a diverse skill set and the ability to jump in where needed and get the job done. However, I believe there is still a need for specializations, and testing is one of them. On the surface it appears easy and replaceable, but when you really analyze it, the value is tremendous.

Here are some ideas for the value you should be providing. This list is by no means complete. Think to yourself: am I doing all of this?  If not, does my team really value me and my contributions, and am I doing all I can to make my team succeed?

Product and Feature Knowledge:

If your team does not come to you with questions about the products and features, information about the users or personas, the functionality, or the use cases and workflows, then you are failing.  As a tester, you have the unique ability to not be bogged down by the technology, code and various frameworks that developers have to deal with. Take the time to dig deep. You should be the expert; no one should know your products and features better than you!

Domain Knowledge:

Many think domain knowledge is solely the focus of the PO or Business Analyst. I think it should also be the tester’s. Who is better suited on your team to represent the users than you? If you do not understand your users’ day-to-day jobs, how they will interact with the system, the industry they work in and how we are trying to help them, then you are failing. Learn about the industry, subscribe to magazines, bookmark industry-specific websites. Take some scheduled time every week or month to learn more. It will help you question designs better, write better acceptance criteria and tests, and design better scenarios, but more importantly it will show you how to help your user base do their jobs better and more efficiently!

Installations and Configurations:

Since you install, uninstall, and configure your applications hundreds or thousands of times, you are the team’s expert in this field. Since most applications today can be configured in many different ways, you should be invaluable to your teams. Learn it, understand it, dig deep, and experiment!

Test Strategy:

Everything you test should have a test strategy applied to it. Do not take a templated approach and apply it to everything you test. The context of what you are building is key to testing, and this is where a skilled tester can really shine. Otherwise, teams tend to fall into the same rote approach for everything they test and do not do the best they can.

Quality Focus:

Everyone wants to think of quality and have it be foremost in everyone’s mind. But no matter what anyone says, it is not.  But it should be for you!  Your job is to keep everyone on track, thinking of quality first, figuring out how you can help in this area, and helping your team be successful and deliver high-quality features. Always be thinking of quality initiatives you can introduce to help your team.

Thinking Beyond Functionality:

This is where many think testing is easy: all we have to do is test the acceptance criteria, execute a simple test or scenario and close off the story. There is way more to testing a story than that. Educate yourself and your teams on test modelling, test design, exploratory testing, assessing risk, identifying and testing quality criteria, boundary and edge cases, etc. These should all be thought about before closing off your stories. You are the tester who should be quality focused; guide your teams and educate them on what testing is really all about!

What To Test and When:

Testing is expensive, and 100 percent coverage is impossible. We cannot test everything. You need to use your experience and knowledge to test what matters most and use the time you have to do the best you can. When tasked with ‘We need to test feature X’, education and communication are important to doing this well.  Be smart about how you test, what you test and when you test, and articulate your testing and decision making to your team. Testing is a thinking activity and should not be relegated to running a series of automated checks and executing some overly prescriptive test cases.
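
To make that kind of prioritization concrete, here is a minimal, purely illustrative sketch of a risk-based ordering; the feature areas and scores are made up, and in practice the team would agree on likelihood and impact together:

```python
# Illustrative sketch of risk-based test prioritization.
# The areas and scores below are invented for this example.

# (area, likelihood of failure 1-5, impact if it fails 1-5)
areas = [
    ("checkout workflow", 4, 5),
    ("report export", 3, 3),
    ("profile page styling", 2, 1),
]

# Simple risk score: likelihood x impact. Test the riskiest areas first.
for name, likelihood, impact in sorted(areas, key=lambda a: a[1] * a[2], reverse=True):
    print(f"{name}: risk score {likelihood * impact}")
```

The numbers matter far less than the conversation: making the ordering explicit gives your team something concrete to challenge and adjust.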

Exploratory Testing:

To be clear, this is not ad hoc testing, people!  It does not mean ‘play around with it for a few hours and see if you find any problems’.  I am not going to get into what exploratory testing is all about here; if you don’t really know, then you are failing as a tester. This is what testing is all about, not running through a test plan and test cases. Learn it, share how to do it with your teams and think beyond the requirements.

Risk:

You should always be risk averse.  You are your team’s risk analyst. Always monitor what new functionality is being added, what bug fixes are being applied, and what change requests are being implemented. Your teams will often only care whether what they have already encoded as automated checks still passes. You are there to think deeper, react to change and use your skills and training to uncover other potential problems.

Applying Oracles and Heuristics:

How do we know what to test, and how do we measure success or failure?  This is where oracles and heuristics come into play.  If you do not know what these are or how to use them as part of your test strategy, then learn about them and start using them.  It will change your testing forever.
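
As a toy example of one kind of oracle (a consistency oracle that compares the product’s answer with an independent calculation), here is a minimal sketch; apply_discount() and the pricing rule are hypothetical stand-ins for real product behaviour:

```python
# Illustrative sketch of a consistency oracle: check the function under test
# against an independent, deliberately simple re-computation.
# apply_discount() is a hypothetical stand-in for real product code.

def apply_discount(price, percent):
    # Pretend this is the production code under test.
    return round(price * (1 - percent / 100), 2)

def discount_oracle(price, percent):
    # Independent, naive calculation used as the oracle.
    return round(price - (price * percent) / 100, 2)

for price, percent in [(100.00, 10), (19.99, 25), (0.00, 50)]:
    assert apply_discount(price, percent) == discount_oracle(price, percent), \
        f"mismatch for price={price}, percent={percent}"
print("Consistency checks passed")
```

The real power comes when the oracle is genuinely independent of the implementation: a specification, a comparable product, a simpler model, or a domain expert’s expectation.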

Test Artifacts:

Gone are the days of long test plans containing reams of prescribed test cases (or hopefully we are getting there). We need to be smarter about how we document testing for the future. Again, context is absolutely key here.  If something is overly complex and tightly integrated with specific data and results (like a report), we need to document it differently than we would a page where we enter customer details.  Document accordingly, with just enough information. Use previous testing to help guide you. Smart test documentation is a vital component of what we do.

Product / Feature Health:

Continual monitoring of the health of your products and features is important. Come up with ways to show this to your teams and help them make quality-based decisions. You would be surprised how valuable this can really be!

Targeted Release and Regression Testing (not checking):

You have tons of automation (unit, UI, API): congrats, job well done. These are very important and key to your team’s success.  We cannot manually test everything all the time.  But remember, this is not testing; this is just checking that everything still works as expected based on the knowledge you had at the time. Targeted release and regression testing should still be done. Products change, code changes, and impacts to functionality change over time.  Be proactive and smart with your testing: target things that your automation may not cover or, more importantly, does not know about!
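
One lightweight way to keep that targeting explicit is to tag automated checks by product area so a focused subset can be run when that area changes. This is only a sketch: the marker names and checks below are hypothetical, and custom markers would need to be registered in pytest.ini.

```python
# Illustrative sketch: tagging checks by product area so a targeted regression
# subset can be selected when that area changes.
# The marker names and checks are hypothetical.
import pytest


@pytest.mark.billing
def test_invoice_total_includes_tax():
    subtotal, tax_rate = 100.00, 0.05
    assert round(subtotal * (1 + tax_rate), 2) == 105.00


@pytest.mark.search
def test_exact_match_is_first_result():
    results = ["blue widget", "widget accessories"]
    assert results[0] == "blue widget"
```

After a change to the billing code, running pytest -m billing exercises only the billing-tagged checks, while your own exploratory, targeted regression testing covers what the automation does not know about.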

What To Automate:

You are the key here.  We want to limit the amount of automation we have; we want it to be targeted, without duplication.  This requires a lot of analysis, planning, design and execution.  Start small: isolate what is important to automate first. Take the lead, find out what your devs can do, find out what you can do, and decide as a team if you are happy with the combined coverage.  Be smart about automation and you will be very happy you have it!
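
To make ‘start small’ concrete, a single high-value check is often a better first step than broad coverage. Here is a minimal sketch; the endpoint URL and response shape are assumptions, not a real service:

```python
# Illustrative sketch of one small, targeted automated check the team agrees
# matters most, rather than broad duplicated coverage.
# The URL and expected response fields are assumptions for this example.
import requests


def test_health_endpoint_reports_ok():
    response = requests.get("https://example.test/api/health", timeout=5)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"
```

Once the team agrees a check like this earns its keep, the next candidates can be chosen the same way, keeping overlap with the developers’ unit and API checks to a minimum.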

I could list another dozen items, but for the sake of brevity I am capping the list for this blog. I am not saying that these cannot be done by other team members; they certainly can, but these things will not be top priorities in their day-to-day activities.

They should be for you!