
Making QA Work Better

Are you making a critical mistake in how you approach QA? Sure, you realize it's an important final step in your software development and solution delivery projects. However, you can't simply tack it on at the end of each project and expect success.

So what's missing? The integration of QA. Today, I will outline several key principles to strengthen the QA process at your organization.

There is a wealth of material available online defining what QA is, how it should work, and the results it should produce. While this is all great theory, in practice, especially on digital projects that move fast and don't always have generous budgets, it's hard to know which concepts and approaches will actually add value.

One thing we've learned over the years at BlueModus is that QA is not a discrete bolt-on that can be added to any team or project. It's much more like a sliding scale; any given scenario will have its own unique set of characteristics. Each individual case may demand QA that is more or less integrated into the daily process, more or less valuable at helping the team deliver good code, more or less fast and efficient, etc.


Because there is no one universal approach to QA, it can be difficult to know how best to apply the proper principles and processes. In our case, BlueModus typically runs agile projects (mainly Scrum), so we do have a few key principles that can help us:

  • Testers are part of the team, not a sub-set of it (and the developers themselves often test).
  • Each sprint should produce usable, potentially releasable product increments.
  • Quality goals should not decrease during a sprint.
  • Each increment is additive to all prior increments and thoroughly tested, ensuring that all increments work together.

Based on these principles alone, it's clear that testing needs to be integral, repeatable, efficient, and complete to really make a difference. Developers can't just throw something over the fence at the end of a sprint. QA testers can't operate in a vacuum. If developers don't do any testing at all, lots of bugs are bound to come back, jeopardizing the sprint. Additionally, if developers have no easy way to ensure that previous work has been properly tested, they'll end up doing more testing than development. The list of implications is quite long.
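One way to give developers that "easy way" to confirm prior work still holds is an additive regression suite that grows sprint over sprint. Here's a minimal Python sketch of the pattern; the check names and assertions are illustrative placeholders, not a real project's tests:

```python
# A tiny additive regression suite: each sprint registers its checks in
# one shared list, and the whole list runs before anything ships.
# Check names and logic below are illustrative placeholders.

CHECKS = []

def check(fn):
    """Register a check function so tests accumulate sprint over sprint."""
    CHECKS.append(fn)
    return fn

@check
def sprint_1_homepage_title():
    rendered = "<title>Welcome</title>"  # stand-in for real rendered output
    assert "Welcome" in rendered

@check
def sprint_2_contact_form_fields():
    fields = {"name", "email", "message"}  # hypothetical form model
    assert {"email", "name"} <= fields

def run_all():
    """Run every registered check; return the names of any that fail."""
    failures = []
    for fn in CHECKS:
        try:
            fn()
        except AssertionError:
            failures.append(fn.__name__)
    return failures
```

Because every increment's checks stay registered, each sprint automatically re-verifies everything that came before it, which is exactly what the "additive increments" principle demands.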

The bottom line is that QA is not an exact science, which is a primary reason why teams struggle with it so much.

Over the years here, we've learned plenty about effective QA. Some of these lessons were learned through a lot of sweat and tears, others by good fortune and smart thinking. Either way, I hope you find them as useful as we do.


No two engagements, clients, or projects have the same needs with regard to QA. As each instance is unique, it is nearly impossible to implement a one-size-fits-all model. For each engagement, make sure that your QA requirements are sensible and achievable, and that they add value to the process.

For example, a small project to deliver a simple content site should have simple standards for QA. Browser testing, copy QA, and load testing might be all you need. However, an ERP-integrated e-commerce site will require deeper levels of those same QA aspects, as well as functional testing, security testing, etc.
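One hedged way to make that per-project scope explicit is a simple matrix of QA activities keyed by project profile. The profiles and activity lists below are examples for illustration, not a universal standard:

```python
# Illustrative only: make each engagement's QA scope an explicit,
# reviewable artifact rather than an ad-hoc decision. Profiles and
# activities here mirror the examples above and are placeholders.

BASELINE = ["browser testing", "copy QA", "load testing"]

QA_MATRIX = {
    # A simple content site gets the baseline only.
    "content_site": BASELINE,
    # An ERP-integrated e-commerce site layers on deeper testing.
    "ecommerce_erp": BASELINE + ["functional testing", "security testing"],
}

def qa_plan(profile):
    """Return the agreed QA activities for a given project profile."""
    try:
        return QA_MATRIX[profile]
    except KeyError:
        raise ValueError(f"No QA plan defined for profile: {profile}")
```

Writing the scope down this way forces the "sensible and achievable" conversation to happen at kickoff instead of mid-sprint.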

Be careful never to burden projects with unnecessary QA overhead. At the same time, don't cut corners, as insufficient QA will spell all sorts of trouble: missed deadlines, budget overruns, failed deliverables, and eventually even failed projects.


It's tough to properly QA anything if the testers (or unit/automated tests) don't have solid requirements. At the same time, one can get bogged down if the requirements are too complex and difficult to maintain.

A good rule of thumb: make sure every task a developer performs has an objective way to confirm it was done correctly and works as intended.

This could be as simple as defining a binary yes/no requirement (Is the button green now?), or outlining a more complex set of rules with starting conditions, test scripts, and expected results. In any case, if a developer has no objective way of knowing a task is done, other than their own common sense, subjective differences of opinion will creep into the project. This is never a good thing, as basic assumptions made by different people can vary dramatically, which inevitably costs more time and money.
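Even the "Is the button green now?" example can be turned into an objective, repeatable check. A minimal sketch follows; the HTML snippet and helper function are hypothetical, and a real project would more likely assert against a live page via a tool such as Selenium or Playwright:

```python
# Sketch: turning a binary yes/no requirement into an automated check.
# The markup and extraction logic are simplified placeholders.
import re

def button_color(html):
    """Extract the inline background color of the submit button, if any."""
    match = re.search(r'id="submit-btn"[^>]*background-color:\s*([^;"]+)', html)
    return match.group(1).strip() if match else None

def test_submit_button_is_green():
    html = '<button id="submit-btn" style="background-color: green">Go</button>'
    assert button_color(html) == "green"
```

The point is not the specific tooling: once the requirement is stated as a yes/no question, anyone (or any script) can answer it the same way, with no room for differing assumptions.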


It will never be perfect, and it will never be "100% correct," but as long as you're always striving to improve your approach and processes, you'll put yourself in a good place. Make sure to include QA feedback (both results and experiences from the team) in your retrospectives for each project. To help your team avoid future mistakes, work to create internal best practices and training guidelines, and leverage the good work that has already been done in the past.

When (not if) you have issues with QA on a given project, put a positive spin on it and use it as a learning experience. Challenge your team to find creative approaches for improvement and optimization. Every lesson you learn will have some cost, so try to get the most value you can out of each one.


QA is a vital piece of successful software development, not as a line-item bolt-on, but fully integrated with the other aspects of each project. In my next post, you can look forward to several more practical lessons, focused on the proper way to set up your team and internal tools for the most efficient QA possible.

I hope that this has been helpful, and I'd love to hear your feedback. Do you have experiences that are in line with the thoughts above? Or that lead you to disagree? It's a complex topic and I believe that teams can learn a lot from each other by continuing to share our knowledge and understanding.

Read Part 2 of this series here.
