
Testing and the QA process


So, on which step is test case 200.10 failing exactly? Can you write up a coherent bug report this time? We’re having problems reproducing it.

Sample Speakeasy Test Plan

Sample TMO Risk Assessment Test Plan

Intro:

In positions at Speakeasy, Inc. and T-Mobile USA, I developed QA skills and methodologies. At both companies I worked on the first roll-out of a variety of products, and part of our workflow was to perform certification testing on pre-production services and devices. This article explains the evolution of my thinking on the subject of testing. At the top of this page I’ve provided two sample test plans for reference.

Speakeasy, Inc (2006-2010):

As a Customer Premises Equipment Engineer, the workflow would start with our Product team, who would come to us with a new phone, a new application-layer gateway, or perhaps a service in our BroadSoft installation. One of the major challenges at Speakeasy was that there was no real plan for end-of-life; our test plans for VoIP handsets and other hardware were very simple at first, with about 12 test cases, but they became increasingly complex as our product line grew. In the sample test plan above, you can see that complexity very clearly. The plan took about one person-month to execute and was very labor intensive. If any bug was found in the process, we had to work with the vendor to get it recognized and addressed. The situation was getting out of control, and our test plans had grown organically out of a home-grown methodology. I assumed we were not the first to encounter this problem, so I decided to look for alternatives to the way we were creating our test plans.

I found a copy of “Managing the Testing Process” and started reading through it. The book focuses on risk-based test creation. This part of the definition from Wikipedia gets at exactly what we were facing at Speakeasy:

“In theory, there are an infinite number of possible tests [my italics]. Risk-based testing is a ranking of tests, and subtests, for functionality; test techniques such as boundary-value analysis, all-pairs testing and state transition tables aim to find the areas most likely to be defective.”
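The core idea, ranking tests so effort goes where failure is most likely and most costly, can be shown in a short sketch. This is purely illustrative: the features, likelihood scores, and impact scores below are invented, not taken from either sample test plan.

```python
# Hypothetical sketch of risk-based test ranking. All feature names
# and 1-5 likelihood/impact scores are made up for illustration.

def risk_score(likelihood, impact):
    """A simple risk score: likelihood of failure times business impact."""
    return likelihood * impact

# (feature, likelihood 1-5, impact 1-5) -- invented example data
candidates = [
    ("SIP registration", 4, 5),
    ("Speed-dial keys", 2, 2),
    ("Firmware upgrade", 3, 5),
    ("Display contrast", 1, 1),
]

# Rank the features so the riskiest areas get test effort first
ranked = sorted(candidates, key=lambda c: risk_score(c[1], c[2]), reverse=True)

for name, likelihood, impact in ranked:
    print(f"{name}: risk {risk_score(likelihood, impact)}")
```

With a ranking like this, management can draw the cut-off line wherever their risk tolerance sits: everything above the line gets tested, everything below is a documented, accepted risk.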

I started to pitch these new ideas to the CPE team and we began to adopt them.

T-Mobile, USA:

As a Network Design Engineer at TMO, I used what I had learned at Speakeasy to design tests for Session Border Controller configuration in the pre-production IMS roll-out. Test efforts focused on interoperability with configurations and software upgrades on the Acme Packet Session Border Controller product. In the second sample test plan, the first tab deals with identifying risks and assigning each a priority. The test cases themselves have a much better ID scheme: if a tester reported an issue with 200.10.08, I could immediately locate the failing part of the test plan: test suite 200, case 10, step 8. The test plan executor now has a far better way to write a bug report and hand that information to the vendor, or whoever will address it.

You can also see effort and duration estimates included in each test case; these let the test manager give management better time estimates, and provide flexibility should management decide to shorten the test schedule. Most importantly, the format of the risk-based test cases is much more flexible than the original Speakeasy test plan: it is easy to add or remove steps. The formatting in the Speakeasy test cases would break if you needed to add a step, and required a lot of tinkering on the tester’s part. A test plan should be a map for the tester and allow them a little freedom to change things on the fly; if they decide to add a step and go to an interesting place, the test plan should support this.
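The suite.case.step ID scheme is simple enough to sketch in a few lines. The parsing function and field names here are my own invention, not part of the actual test plan tooling; the point is only that a structured ID turns a vague bug report into an exact location.

```python
# Hypothetical parser for the suite.case.step ID scheme described above.
# The function name and returned field names are invented for illustration.

def parse_test_id(test_id):
    """Split an ID like '200.10.08' into suite, case, and step numbers."""
    parts = test_id.split(".")
    if len(parts) != 3:
        raise ValueError(f"expected suite.case.step, got {test_id!r}")
    suite, case, step = (int(p) for p in parts)
    return {"suite": suite, "case": case, "step": step}

ref = parse_test_id("200.10.08")
print(f"test suite {ref['suite']}, case {ref['case']}, step {ref['step']}")
# prints: test suite 200, case 10, step 8
```

A bug report that carries an ID like this is unambiguous: the vendor can jump straight to the failing step instead of asking the tester to reconstruct what they did.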

Conclusion:

Quality Assurance is not very romantic, and there is often conflict between management, software or hardware producers, and the testers. A risk-based test model gives the testing staff a way to present the effort to management that is flexible and lets management decide what level of risk they are comfortable with; it gives testers better tools for reporting findings to whoever will tackle the bugs; and with a graphical representation of test coverage, testers can see where to focus effort and where effort is being wasted. Creating a home-grown system can be rewarding, but it is also important to remember that someone has probably tackled this problem before and might have good suggestions to follow. It usually pays to look around and see what tools are available.