Is your QA team effective? [closed]


Is your QA team effective? I find that many of the QA people I have encountered are verifiers rather than software-breaking experts.

What I mean by verifiers is that they step through all the scenarios provided, basically walking through the application and ensuring that it does what it is supposed to.

What I mean by breakers is that they verify, but they also diligently seek out scenarios that break the software and uncover defects.

Are your findings similar?

Of historical note on breaking software: there was a team inside IBM in the '80s called the Black Team. To encourage the identification of defects, they had a culture of saying that they had "succeeded" when they broke the software, and they considered their work a "failure" when they failed to find any faults in it. On the other hand, the outcome of their "failure" was great, reliable software...

And the book: "How to Break Software: A Practical Guide to Testing" by James Whittaker


In many instances I'd be delighted if QA did that. My experience is that, if QA exists at all, it often consists of poorly paid agency recruits with the barest understanding of the business. Doing QA well is hard, just as programming is hard, yet it's rarely resourced or managed as the crucial part of the development process that it is.

On the other hand, the best QA team I worked with had 20+ PC ghost images and knew what to do with them; they were DB ninjas who could also communicate with end users. There was a testbed of machines with widely ranging hardware. The lead was strong and knew when to push and when to let it go; she knew the customers, and she knew how developers thought. We soon learned that our standards of "good enough" were not the same as the customers'. Unfortunately the product was still a shabby heap of junk (that's business), but it rarely crashed and the customers loved it.


I'm actually on a testing team, and a lot of what we do is verification and software breaking, but that's only part of it. We also maintain automated tests of the products so that bugs are caught earlier in the development cycle, and we run large scalability tests in cooperation with development in order to push the limits of our product and find the areas that can be updated to allow for further scalability in the future.
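The idea of automated tests catching bugs earlier can be sketched in a few lines. This is a minimal illustration, not this team's actual suite; `apply_discount` and both past defects are hypothetical stand-ins for real product code:

```python
# Minimal sketch of automated regression testing: each past defect gets a
# pinned-down test so it is caught again the moment it reappears.
# `apply_discount` is a hypothetical function standing in for product code.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, clamping percent to the 0-100 range."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

def test_normal_discount():
    # Ordinary verification: the happy path does what it is supposed to.
    assert apply_discount(200.0, 25.0) == 150.0

def test_regression_negative_percent():
    # Regression test for a hypothetical past defect: a negative percent once
    # *increased* the price. The fix clamps the input; this test keeps it fixed.
    assert apply_discount(100.0, -10.0) == 100.0

if __name__ == "__main__":
    test_normal_discount()
    test_regression_negative_percent()
    print("all regression checks passed")
```

Run in a CI pipeline on every commit, checks like these surface breakage within minutes of the change that caused it, rather than at the end of a manual test pass.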

I think we're being pretty effective in our goal of getting the best product possible into the hands of our users, and simple verification is an important part of that process.

Also... I think that in a lot of ways QA is limited by the resources they are given. We're lucky enough to have a lot of access to development tools and large-scale virtual environments (as well as decent hardware-based machines), and we're given enough freedom to actually experiment with the product.

Some of the testing is by necessity of the "run these test cases" variety, but that's only one portion.


I believe our QA team is effective. But we don't employ testers; we employ QA Analysts who:

  • Write their own test cases, utilizing the Business Requirements and Technical Design documents provided for the project. QA is involved as early in the development cycle as possible.
  • Have intimate knowledge of the business.
  • Know the difference between unit testing, functional testing, systems integration testing, regression testing, etc. In other words, they've put effort into studying QA methodologies.
  • Regularly perform regression testing on the system. This ensures that new projects have not introduced errors in working code.
  • Have the respect of the development team, along with salaries which are comparable to development salaries. Because QA has the respect of the development team, developers are quick to ask QA for opinions on some code changes, especially because QA has a better general knowledge of the business overall than a particular developer, who is almost always specialized in a certain area.

A QA team composed of a group of manual testers who use developer-provided test cases and do not have a grasp of QA methodology or the business domain may not be as effective.


I think this is a broad generalization (and somewhat backhanded to someone who works in QA). I would ask: what do you expect and want from QA?

IMHO, software verification is what you want from a QA team, including:

  • verifying that all requirements are implemented
  • verifying that all requirements are implemented correctly
  • reporting any deviations from requirements
  • providing documentation to reproduce each deviation from requirements
  • providing scalability, stress, capacity, and fault-tolerance support for the product
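The first point, checking that every requirement is implemented, is essentially requirements traceability. A minimal sketch, assuming test cases are tagged with the requirement IDs they cover (all IDs and test names here are hypothetical):

```python
# Minimal requirements-coverage check: report any requirement that no test
# case claims to cover. Requirement IDs and test names are made up.

def uncovered_requirements(requirements, coverage):
    """Return the sorted requirement IDs not covered by any test case."""
    covered = set()
    for req_ids in coverage.values():
        covered.update(req_ids)
    return sorted(set(requirements) - covered)

requirements = ["REQ-1", "REQ-2", "REQ-3"]
coverage = {
    "test_login": ["REQ-1"],
    "test_logout": ["REQ-1", "REQ-3"],
}

print(uncovered_requirements(requirements, coverage))  # ['REQ-2']
```

A gap in this report means a requirement can ship completely unverified, which is exactly what the first two bullet points exist to prevent.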

If that means calling that group "software breakers" and "verifiers", so be it. In my job, human health consequences can result from malfunctioning software. It's not about me, my group, or the QA group; it's about the people who are helped by using the software I write. It's really about attitude.



I would hate to be a tester.

QA has a lot in common with sales and tech support: if you knew enough to be really good at the job, you wouldn't be doing that job anymore.

Think about it: have you ever called a tech support line? It's an unenviable job (and one I did once, long ago). Basically you spend 90% of your time apologizing for something you have no control over (or just wasting your time trying to get something to work on 8-year-old hardware because someone's too cheap to lay out a few hundred bucks for a new PC, but that's another story).

In my last job our QA turnover was high. To be honest, they'd have to be there for at least 6 months before I could be bothered learning their names. Before that, it was just too likely they'd move on, so it simply wasn't worth the time investment to train them up (speaking as a developer).

Don't get me wrong: some are good, but they are few and far between, precisely because the good ones go on to become BAs or something else.


At breaking software? No, most of our QA people are not good at that. At being effective? Yes: they catch bugs that we miss and make suggestions on how we might improve the UI when it doesn't make sense to them.


Our QA team does a bit of breaking and a lot of verification, both of which, I think, are the basic points of having QA. They're the final rigorous test point between your code and the production environment or RTM.

While they don't actively try to break the application, they're very good at doing the kind of things that users are going to try to do. This, at some point, does end up breaking the application.


I work in a 3 person development group. We each have to test and are generally left to our own testing devices (we do web app work primarily). It's not a solution I'm happy with. Sometimes users are involved in testing. I have a couple of users who are pretty sharp and reliable, but I never plan on getting input from the rest.


Effective? How do you measure that? By the number of bugs found? What if the code is really solid? What if they find just one bug, but it's a really important one? Do you blame them for the bugs that do slip through the cracks?


Our organization has only had a formal QA team for about a year and a half. Now we are trying to build a configuration management group (we have only one person for this now). To a person, the development managers wish we would hire more QA people rather than configuration management staff: we have found that QA is extremely valuable to us, while configuration management only causes projects to fall further behind as we wait for someone to put the new code out on the release schedule rather than when it is ready to go. Developers have blind spots in testing their own code (we know how it is supposed to work, after all), and good QA people will always find things the developers missed.

I think the biggest problem with getting good QA people is that the salaries tend to be on the lower side. There is no reason for a technical person to move to that career path if he or she is already making more than the QA people make. So companies end up with people right out of school, or people with no technical background. These are exactly the people who don't have the knowledge to set up good automated testing or the business acumen to understand how users need the software to work. (I have seen this with business analysts as well, who are also critical to the quality of the final product. I just saw a job opening for one of these that paid 30K a year. I'm sorry, but the person who is going to determine the requirements for software that is critical to the functioning of a company needs to be paid more than the receptionist. And managers wonder why software quality can be so low. Bad requirements = bad software, close to 100% of the time.)


In our place these are just students who click through the application following prescribed scenarios. No testing on their own initiative, no interest, nothing beyond what they were given. As a result, it's me (a developer), our PMs, and sadly our customers who do the final testing.

I've found that I'm probably the only one entertaining the idea of doing something non-standard just to see what will happen. I have managed to find bugs in heavily used sections of the software by doing something that no one had thought of trying for several years, and I had been at the company for just a few months.
