While doing some bug fixing recently, I stumbled more or less by accident upon a really easy and useful technique for quickly testing and fixing an application. Since then, I have applied the technique a few times and the results are really astonishing. Because I found it so effective and straightforward, I would like to share my experience here.
The setup is simple: get a programmer and a tester to sit next to each other, each with their own PC. While the tester opens fire on the application, the developer immediately fixes the bugs that are found. The tester then validates these fixes.
This technique works best if you apply a few rules. Here are some rules we applied in our sessions:
- Preferably, the tester is not a developer or hasn’t been working on the application. Ideally, this should be the client or an end user, but an analyst or product owner works well too.
- Use an interactive document so that the tester doesn’t have to interrupt the developer while a bug is being fixed. In our case, we used a simple Google Docs spreadsheet: the tester adds failures and the developer updates their status. Use a few predefined states so it’s easy to see what has been fixed and what state each bug is in (in source control, deployed, fixed but not integrated, …).
- Make the developer’s changes visible to the tester immediately. You can do this by automating deployment to a test environment from a CI server. Alternatively, you could even let the tester access the developer’s web server directly. It’s important that access is nearly instant, so the tester can validate each fix right away.
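The shared status document from the rules above can be as simple as a list of bugs, each in one of a handful of predefined states. A minimal sketch in Python — the state names here are illustrative assumptions, not the exact ones we used:

```python
from enum import Enum
from dataclasses import dataclass

class Status(Enum):
    """Illustrative states for a bug in the shared document."""
    REPORTED = "reported"                           # tester logged the failure
    FIXED_NOT_INTEGRATED = "fixed, not integrated"  # fixed on the developer's machine only
    IN_SOURCE_CONTROL = "in source control"         # committed, waiting on deployment
    DEPLOYED = "deployed"                           # live on the test environment
    VALIDATED = "validated"                         # tester confirmed the fix

@dataclass
class Bug:
    description: str
    status: Status = Status.REPORTED

def ready_for_retest(bugs):
    # The tester only needs to revisit bugs that reached the test environment.
    return [b for b in bugs if b.status is Status.DEPLOYED]

bugs = [
    Bug("typo on login page", Status.DEPLOYED),
    Bug("date format wrong in IE", Status.IN_SOURCE_CONTROL),
]
print([b.description for b in ready_for_retest(bugs)])  # ['typo on login page']
```

The point is not the code but the convention: with a glance at the state column, the tester knows which fixes are worth retesting without asking the developer.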
These are not hard rules; adapt them to suit your specific needs.
Why does it work?
In my experience, this setup is so powerful because it eliminates any barrier between the tester and the developer.
The tester doesn’t have to create reproduction steps and screenshots. Communication is direct and immediate, so the developer can just ask and look at the tester’s screen. This eliminates countless e-mails and discussions about how to reproduce the behavior and whether it’s intended behavior or not.
Another aspect is that it’s a very focused activity. Usually bug reports come in through a ticketing system and the developer fixes them in a spare moment or during time set aside for bug fixing, which brings many distractions. The end goal is still fixing the user’s problems, but when that user has no face and is not in the room, the goal feels blurry and far away. When you’re interacting one-on-one, the goal is satisfying the needs of the user who sits right next to you.
There’s also no context switching. When the developer reads a bug report, he’s already fixing bugs. When the tester is asked to reproduce the bug, he’s already inside the application. When the tester is asked to validate the fix, he still remembers the bug.
This technique is especially useful for eliminating minor bugs: spelling mistakes, differences between browsers, visualization problems, small behavioral changes … In general, it works really well for bugs that are easy to solve. For bugs that affect the whole system, or for performance bottlenecks, it’s probably better to create a bug report. Ideally, the developer should be able to keep up with the tester, so when a bug turns out to be harder to fix, we usually leave it open and take the next one. At the end of the session, the bugs that were not fixed are added to the bug tracking system.
I have found this technique easy and effective and have applied it several times with success. I consider this type of testing a sort of code review from a functional point of view: you could say it stands to BDD as code reviews stand to TDD.
What do you think? Have you tried similar techniques, or do you see any pitfalls that I haven’t mentioned? I’m curious to find out whether this is a common technique and if there are situations where you can apply it that I haven’t thought of.