Testing is a First Class Job
Programmers naturally assume that, in general, "things work." ... Testers, on the other hand, are all descendants of Murphy and assume, in general, that things "don't work right."
- from Testing Extreme Programming

If you've never been a tester on a software project you might not understand that quote very well. Testing is not a glamorous job on traditional projects. It's not sexy like a development position, and there's an implicit battle going on between developers and testers. It's almost as if testers are sometimes blamed for finding bugs, when they should be rewarded for it.

At a job I had as a tester I gained a unique perspective on the "traditional" software process, whereby features were added by developers and checked by testers. If a defect was found in the system, it would go through the following process:

  1. defect is logged in the system by a developer, tester or customer
  2. defect is triaged to the appropriate product group
  3. defect is assigned to the correct developer in that group by the group manager
  4. defect is fixed by the developer, depending on the developer's own priority queue (then a build is made containing the fix, usually overnight)
  5. a tester installs the build and verifies that the defect was fixed and closes it

Sounds pretty organized, right? Yup, it was -- but slow! It wasn't uncommon for each of those steps to happen on a different day. That's a minimum of five business days to completely fix a problem, and even the slightest little problem had to go through this process. In practice most defects took much, much longer, and the testing team always had a backlog of fixed defects to verify.

Developers on this team fixed problems only as they arrived in their queues as defects. The testing team had a test suite, but it was incomplete and only about half automated; the rest were manual GUI tests done by the testers every few months. The automated portion of the suite was not executed on each build (see the Eclipse project for an excellent example of how to do that), and the developers did not run the test suite at all -- not even a small portion of it to check for regressions before they checked in their code.

So it was quite easy for a developer to check in a regression that a unit test could have caught before the code ever went in. Instead it took at least a full week to fix a regression -- and that's if it was caught immediately, which was unlikely. Predictably, the testing team ran into the same regressions over and over again.
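
To make that concrete, below is a minimal sketch of the kind of pre-check-in test that could have caught those regressions. It's JUnit 3 style (the "green bar" tooling of the day), and PriceCalculator with its rounding rule is invented for illustration -- it's not code from the project described above.

    // A sketch of a pre-check-in regression test, JUnit 3 style.
    // PriceCalculator and its rounding rule are made up for illustration.
    import junit.framework.TestCase;

    class PriceCalculator {
        private final double taxRate;

        PriceCalculator(double taxRate) {
            this.taxRate = taxRate;
        }

        // Returns the subtotal plus tax, rounded to the nearest cent.
        double total(double subtotal) {
            return Math.round(subtotal * (1 + taxRate) * 100) / 100.0;
        }
    }

    public class PriceCalculatorTest extends TestCase {

        // If a later change breaks the rounding rule, this fails in seconds
        // when the developer runs the suite -- not two weeks later as a defect.
        public void testTotalIsRoundedToTheNearestCent() {
            assertEquals(26.75, new PriceCalculator(0.07).total(25.00), 0.001);
        }

        public void testZeroSubtotalHasZeroTotal() {
            assertEquals(0.00, new PriceCalculator(0.07).total(0.00), 0.001);
        }
    }

Run it with junit.textui.TestRunner (or the IDE's green-bar runner) before checking in and the feedback arrives in seconds, not weeks.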

So what's the problem? Not enough management direction? No, I think it's the developers. They are ignorant of the problems they are causing because they get no feedback. By the time a defect for a regression lands on them, it's two weeks later and they have no idea how they caused it. With a unit test suite, that feedback is immediate.

As well, the dot-com boom era created a niche for a lazy developer position, centred on giving highly paid and talented coders sexy work to keep them around. If you were one of these guys and your manager told you to start writing tests with your code, you'd probably quit. So there's pressure to keep handing them sexy work, which ultimately gives them less immediate accountability for regressions. Just write your code and the testers will find your mistakes, right? There's no way to tell you just "broke the build" by checking in a regression, because there's no test for it! How convenient.

Developers need to learn from their own mistakes to become better developers. They need an ingrained sense of quality in their work, instead of a casual "oh, I'll fix the defects as they come into my queue" attitude. Only then will they be easier to manage -- when that wise man said "managing programmers is like herding cats" he wasn't kidding.

I've learned more about my own coding from test-driven development than from any amount of straight-up development experience. It's humbling to have to write your own tests -- and for recruiters and managers, that's probably the worst part: where do you find people with the sense of quality and the humility to do it?
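
For anyone who hasn't tried it, the cycle is short: write a failing test that pins down the behaviour you want, then write just enough code to make it pass. Here's a sketch of one turn of that cycle, again JUnit 3 style with every name invented for illustration:

    // Step 1 (red): the test below is written before display() exists and fails.
    // Step 2 (green): UserName.display() is written with just enough code to pass.
    import junit.framework.TestCase;

    class UserName {
        private final String raw;

        UserName(String raw) {
            this.raw = raw;
        }

        // Just enough code to turn the test green; refactoring comes after.
        String display() {
            String trimmed = raw.trim();
            return trimmed.substring(0, 1).toUpperCase() + trimmed.substring(1);
        }
    }

    public class UserNameTest extends TestCase {

        public void testDisplayNameIsTrimmedAndCapitalized() {
            assertEquals("Ryan", new UserName("  ryan ").display());
        }
    }

The humbling part is that first step: you have to commit to what the code should do before you get to write any of it.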

Posted at April 30, 2004 at 12:20 PM EST
Last updated April 30, 2004 at 12:20 PM EST

I love testing - I love the fact that I can say with some certainty that my stuff works according to how I believe it should (aka the unit tests).

There is something powerful about getting that green bar - and every so often just loading your application and watching it fly.

» Posted by: aforward at April 30, 2004 03:50 PM

Exactly. I love the high certainty I have in unit tested code. Being able to say "I know this works well" and be confident about it is powerful too.

» Posted by: Ryan at April 30, 2004 09:36 PM

Having been testing applications for the past four months, I have to say that many of the errors I find are related to the interaction between components. The two separate entities work properly on their own, but problems are introduced during their interaction. I realize that unit tests could be written to test the communication, but now where do we draw the line (if there is a line...) for test writing?

» Posted by: James at May 1, 2004 05:21 PM

Integration testing is just as important as unit testing. Depending on the level you're at, an integration test could look like a unit test (testing a unit that uses several sub-units, which in reality is what all units do) or it could be a customer acceptance test, which is also written like a unit test.

Customer acceptance tests are written at the highest possible level of the API, just below the GUI, to simulate user stories or use cases.
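
For a concrete picture, here's a sketch of a test just below the GUI that drives two collaborating pieces through a small user story. JUnit 3 style again, and everything in it (Catalog, Cart, the prices) is invented for illustration:

    // An integration/acceptance-style test: the two units pass their own
    // tests, and this one checks their interaction through a user story.
    import junit.framework.TestCase;
    import java.util.HashMap;
    import java.util.Map;

    class Catalog {
        private final Map prices = new HashMap();

        void addItem(String sku, double price) {
            prices.put(sku, new Double(price));
        }

        double priceOf(String sku) {
            return ((Double) prices.get(sku)).doubleValue();
        }
    }

    class Cart {
        private final Catalog catalog;
        private double total;

        Cart(Catalog catalog) {
            this.catalog = catalog;
        }

        void add(String sku) {
            total += catalog.priceOf(sku);
        }

        double total() {
            return total;
        }
    }

    public class CheckoutStoryTest extends TestCase {

        // User story: a shopper adds two items and sees the right total.
        public void testShopperSeesTotalForTwoItems() {
            Catalog catalog = new Catalog();
            catalog.addItem("BOOK", 12.50);
            catalog.addItem("PEN", 1.25);

            Cart cart = new Cart(catalog);
            cart.add("BOOK");
            cart.add("PEN");

            assertEquals(13.75, cart.total(), 0.001);
        }
    }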

There is a "line" that you have to decide on, and it's the line you draw with respect to how much GUI to test automatically. The more automation you put in GUI testing, the more fragile and less agile it is. The less automation you put in, the more manual testing needs to be done, but it's more agile.

» Posted by: Ryan at May 2, 2004 11:44 AM
