How Does TDD Change QA?
A viewer asks:
Let’s assume you are part of an organization that has good, smart QA test automation engineers working alongside developers. Given your testing breakdown explained in the video [“Front-End Unit Testing in a Nutshell”], how do you see the division of labor breaking up between the two groups?
As Elisabeth Hendrickson describes in her article, “Better Testing, Worse Quality,” we get better software when developers take responsibility for delivering quality results. When a QA/test group is responsible for a “quality gate,” it’s too easy for developers to succumb to schedule pressure and hand off poor-quality work to QA, which inevitably leads to more bugs escaping.
So my approach (and the general Agile strategy) is to prevent defects during development rather than finding and fixing them in test. Ultimately, I’d like to remove the need for a separate QA step prior to release. Test-driven development (TDD) is a key part of this.
What Does QA Do?
TDD isn’t perfect. It can help programmers discover when they’ve programmed an algorithm differently than they intended. It’s good for creating a robust suite of regression tests. But it’s useless when a programmer misunderstands what he or she should be doing. It can’t catch incorrect assumptions.
This is where QA comes in. I follow Brian Marick’s model and think of testers as technical investigators for the team. Rather than being a crutch for the team that enables them to write bad code, their job is to help the team understand what they don’t know—but need to.
This typically takes two forms.
1. Help customers understand and communicate what they want.
“Customers” (by which I mean the people who influence what gets built) are notoriously bad at expressing their desires. They often gloss over or forget important details. Testers, with their detail-oriented mindset and ability to bridge technology and business, can help customers express themselves with the detail developers need.
This can be done by helping customers create “test cases” on a whiteboard. I put “test cases” in quotes because the value comes from communication, so there’s no need to be formal. They’re really examples. The programmers will use these examples as appropriate as they do TDD.
I worked with a content delivery platform several years ago. The platform used micropayments, so the company didn’t want to charge customers’ credit cards every time they bought something; the credit card fees would have wiped out their profit.
Those payment rules were complex and finely tuned. To make sure we understood them all, we created a series of examples on a whiteboard. We asked what would happen when a user incurred one micropayment, or one large payment, or five small payments, and so forth. Those examples formed the basis for test-driving the billing algorithms.
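Examples like these translate almost directly into tests. Here's a minimal sketch of what that might look like; the billing rule and the threshold value are invented for illustration, not the platform's actual rules:

```python
# Hypothetical rule: batch micropayments and charge the card only
# once the pending total reaches a threshold, to avoid per-charge fees.
def amount_to_charge(pending_payments, charge_threshold=5.00):
    """Return how much to charge now, or 0.0 if we should keep waiting."""
    total = sum(pending_payments)
    return total if total >= charge_threshold else 0.0

# One micropayment: too small to charge yet.
assert amount_to_charge([0.25]) == 0.0

# One large payment: charged immediately.
assert amount_to_charge([20.00]) == 20.00

# Five small payments that cross the threshold: charged as a batch.
assert amount_to_charge([1.00, 1.00, 1.00, 1.00, 1.00]) == 5.00
```

Each assertion corresponds to one whiteboard question, which keeps the conversation with customers and the programmers' TDD cycle working from the same source.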
Customers have limited time and energy for this sort of detail-oriented discussion, so it’s important not to wear out your welcome. When they ask for something ordinary, such as a login button, don’t ask for detailed examples. Just explore the question enough to ensure that there’s nothing unusual, then move on. Save the details for when you need them.
This work happens in parallel with, and just slightly ahead of, the programmers’ work on stories. The idea is to help the programmers understand the true scope of what they’re building and to be ready to help when they have detailed questions.
2. Help the team find and fix blind spots.
Some defects result from programmer mistakes. TDD helps with this. Some result from error-prone designs. Refactoring helps with this. Some result from misunderstood requirements. Better customer communication helps with this. And some are the result of systemic flaws.
These “blind spots” create across-the-board defects that nobody even thinks of. A classic example is SQL injection vulnerabilities. If your programming team isn’t aware of the need to parameterize their queries, they’re going to be vulnerable to SQL injection attacks throughout their entire system.
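To make the blind spot concrete, here's a small Python/sqlite3 sketch (table and data invented for illustration) contrasting a query built by string interpolation with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "x' OR '1'='1"

# Vulnerable: attacker-controlled input becomes part of the SQL itself,
# so the OR clause matches every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()

# Parameterized: the driver passes the value separately from the query,
# so the input can't change the query's structure.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
```

A team that doesn't know the second form exists will write the first form everywhere, which is exactly what makes this a systemic flaw rather than an isolated bug.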
Not all blind spots are so dramatic or far-reaching, but the goal is the same: to discover defects that indicate systemic problems. This isn’t a pre-release gate (although, until you have confidence the process is working, it may be). Instead, it’s a search that can be conducted on “done” or even shipped software.
Because the goal is to find blind spots, regression tests and pre-conceived test plans aren’t a good fit for this activity. If the team is doing TDD well, the programmers should have produced all the regression tests that are needed. Instead, a testing approach that takes advantage of testers’ creativity, experience, and attention to detail is best. Exploratory testing is a great fit.
When defects are found, the team conducts root-cause analysis, determines whether there’s a process problem that needs fixing, analyzes the design of the code to see if refactoring would prevent these errors in the future, writes a test for the specific problem, and fixes the bug.
For example, if exploratory testing discovers a SQL injection vulnerability, the team might trace the root cause to a lack of security expertise, which led to SQL injection being overlooked as a risk. They might decide to hire a programmer with skills in this area, or to take a course. Examining the design of the code, they might discover that database calls are scattered throughout the codebase, and decide to refactor so the database is abstracted behind a single module that uses parameterized queries. Then they would write a test that attempts a SQL injection attack against that module and validates that the attack is unsuccessful.
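A sketch of what that refactoring and its regression test could look like, assuming Python and sqlite3; the `UserStore` module, its schema, and its methods are all hypothetical:

```python
import sqlite3

class UserStore:
    """Hypothetical single module that owns all database access.
    Every query inside it is parameterized, so callers can't
    accidentally build SQL from raw strings."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (name TEXT)")

    def add(self, name):
        self.conn.execute("INSERT INTO users VALUES (?)", (name,))

    def find(self, name):
        return self.conn.execute(
            "SELECT name FROM users WHERE name = ?", (name,)
        ).fetchall()

# The test the team would write: an injection attempt must not
# return rows it shouldn't.
store = UserStore()
store.add("alice")
assert store.find("x' OR '1'='1") == []          # the attack fails
assert store.find("alice") == [("alice",)]       # normal lookups still work
```

The point of funneling access through one module is that the fix and the test live in a single place, instead of being repeated at every call site.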
Tester Specialties
In my experience, testers tend to gravitate towards business-focused or technically-focused work. Both types of tester address the two areas above.
Business-focused testers will help customers express functional requirements through examples (when needed). They’ll conduct exploratory testing that focuses on the user experience and ways users might trigger defects, either accidentally or maliciously.
Technically-focused testers will help customers express non-functional requirements, such as performance and scalability needs. They’ll create sophisticated test platforms that validate and explore the system’s capabilities under load. They’ll conduct exploratory testing that focuses on system capabilities, and they’ll explore how a determined attacker could subvert the system with specialized tools.
From Here to There
I’ve described a sort of “perfect world” scenario here. The goal is to get to the point where QA doesn’t need to test the code before you release. To do that, your team needs to be producing nearly zero defects: doing a good job with test-driven development at the unit, integration, and end-to-end level; keeping technical debt low and refactoring frequently; communicating well with customers or product management.
Most teams aren’t so fluent at Agile. If that includes your team, start with a hybrid approach: keep testing the software before release, or at the end of each iteration, while working as a team to improve your defect-prevention skills.
As those skills improve, testers will have more time to spare for other things. They’ll help customers define what they want in terms of detailed business and non-functional examples. They’ll also look for systemic blind spots, both in terms of user experience defects and underlying system capabilities.
And eventually, with practice, you’ll be doing less testing and getting more quality.