
What You Should Know About Automated Testing

Shlomi Fish <[email protected]>

Introduction

Automated testing is a software engineering method in which one writes pieces of code that, in turn, help ascertain that the production code itself functions correctly. This document provides an introduction to automated software testing.

Motivation

So why do we want to perform automated software testing? The first reason is to prevent bugs. By writing tests before we write the production code itself (so-called Test-First Development), we ascertain that the production code behaves according to the specification given in the tests. That way, we prevent bugs that could have occurred had the code been deployed right away or tested only manually.

Another reason is to make sure that bugs and regressions are not reintroduced into the code-base. Say we have a bug: we write a meaningful test that fails while the bug is still in the code, and only then fix the bug. We can then keep the test in the suite to make sure the bug is absent from every future version of the code. If the bug resurfaces, even in a somewhat different variation, the test will likely catch it.
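
For instance, a regression test in Python might look like this (a minimal sketch; the parse_price() function and its leading-dollar-sign bug are hypothetical):

def parse_price(text):
    # The fix: strip a leading "$" before converting.
    # Before the fix, float("$3.50") raised a ValueError.
    return float(text.lstrip("$"))

def test_parse_price_accepts_dollar_sign():
    # This test failed before the fix; keeping it in the suite
    # guards against the bug being reintroduced.
    assert parse_price("$3.50") == 3.50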

Finally, by writing tests we provide specifications for the code, some form of API documentation, and examples of what we want the code to achieve. This involves less duplication than maintaining separate specification documents and examples. Moreover, the examples are validated to be functional, because we actually run them.

An example

Let’s suppose we want to write a function that adds two numbers. In pseudocode, we can write:

function add(first_number, second_number)
{
    return 4;
}

We can write a test for it similar to:

assert_equal(add(2, 2), 4, "2+2 == 4");

This makes use of a test framework’s function called assert_equal, which may have the signature assert_equal(got_value, expected_value, test_msg): it succeeds if got_value is equal to expected_value, and fails if it is not.
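
For concreteness, here is a runnable Python sketch of the same test, using the standard unittest module; its assertEqual(first, second, msg) method plays the role of assert_equal here:

import unittest

def add(first_number, second_number):
    # Deliberately hard-coded, mirroring the pseudocode above.
    return 4

class TestAdd(unittest.TestCase):
    def test_two_plus_two(self):
        self.assertEqual(add(2, 2), 4, "2+2 == 4")

if __name__ == "__main__":
    unittest.main()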

This test will pass! However, the implementation is incomplete, so we should write more tests:

assert_equal(add(0, 0), 0, "0+0 == 0");
assert_equal(add(-1, 1), 0, "-1+1 == 0");
assert_equal(add(1, 5), 6, "1+5 == 6");
assert_equal(add(-6, 5), -1, "negative outcome");

And so forth. To get these new tests to pass, the implementation needs to be corrected so that it actually adds its two arguments.
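
Continuing the Python sketch above, the corrected implementation together with the fuller test set might look like this:

import unittest

def add(first_number, second_number):
    # The real implementation, replacing the hard-coded 4.
    return first_number + second_number

class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 2), 4, "2+2 == 4")
        self.assertEqual(add(0, 0), 0, "0+0 == 0")
        self.assertEqual(add(-1, 1), 0, "-1+1 == 0")
        self.assertEqual(add(1, 5), 6, "1+5 == 6")
        self.assertEqual(add(-6, 5), -1, "negative outcome")

if __name__ == "__main__":
    unittest.main()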

Note that the exact syntax varies based on the automated testing framework that one uses, but the concept is the same almost everywhere. Moreover, there is nothing magical about assert_equal(); a naïve sample implementation of it may be this:

function assert_equal(got_value, expected_value, test_msg)
{
    if (got_value == expected_value)
    {
        return True;
    }
    else
    {
        warn("Failed " + test_msg +"!");
        throw TestFailure.new();
    }
}
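
Translated into Python, the same naïve implementation might read (real frameworks add richer diagnostics, such as printing the two differing values):

class TestFailure(Exception):
    # Raised to signal that a test has failed.
    pass

def assert_equal(got_value, expected_value, test_msg):
    # Succeed quietly when the values match...
    if got_value == expected_value:
        return True
    # ...otherwise report the failure and abort the test.
    print("Failed " + test_msg + "!")
    raise TestFailure(test_msg)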

The programming cycle

Normally, one works by adding a new test (which may include more than one assertion) to the test suite, running the suite, and seeing it fail on the new test alone (the "red line"). Then one writes the code that gets the new test to pass, and watches the whole test suite, including all previous tests, pass (the "green line"). Then one commits the changes to the version control repository.

After that, one can perform one or more refactoring commits, improving the internal quality of the code while the tests keep passing.
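
As a Python illustration of one such cycle (a sketch assuming a runner such as pytest, which collects functions named test_*):

# Step 1 ("red line"): add a failing test for the new behaviour
# and watch the suite fail on it alone.
def test_subtract():
    assert subtract(5, 3) == 2

# Step 2 ("green line"): write just enough code to make the whole
# suite pass, then commit.
def subtract(first_number, second_number):
    return first_number - second_number

# Step 3: refactor in one or more further commits (rename variables,
# extract helpers, and so on) while the suite stays green.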