Test to Fail

In most disciplines, tests are important. An untested product is hardly better than no product, because it is prone to fail.

But not all tests are created equal. The human tendency is to write tests that check for success. For example, suppose I had a routine that checks whether one number is greater than another, and suppose I implemented it as follows:

greater(a, b) = a ≥ b

The error in this code is obvious: I wrote a greater-than-or-equal sign (“≥”) where I meant a greater-than sign (“>”). In practice, though, errors like this sneak into more complex projects. Now imagine that my test suite looked like this:

using Test

@test greater(2, 1)
@test greater(1, 0)
@test greater(1, -2)
@test greater(100, -100)

What a comprehensive test suite! Unfortunately, it lets my incorrect implementation pass. Why? Because I never once tested for failure.

Each test checks that the function returns true when the first argument really is greater; not one checks that it returns false when it is not. The cases were written with passing in mind, not failing. We have a subconscious tendency to test for success, not failure. Tests for success are useful, but tests for failure are necessary too.
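In Julia's Test framework, a test for failure is just a negated assertion. Adding even one case where the expected answer is false, especially the boundary case of equal arguments, exposes the bug immediately:

@test !greater(1, 1)   # fails under the buggy version: 1 ≥ 1 is true, but 1 > 1 is not
@test !greater(0, 1)
@test !greater(-2, 1)

With the buggy implementation, the first of these fails: greater(1, 1) evaluates 1 ≥ 1 and returns true, when a correct greater would return false.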

The same principle applies outside of computer science: scientists and engineers benefit from negative results as much as from positive ones. In short, give failure a chance.