Test Driven Development in Python

February 5, 2009 at 8:46 am (programming, python)

One of my favorite aspects of Python is that it makes practicing TDD very easy. What makes it so frictionless is the doctest module. It allows you to write a test at the same time you define a function: no setup, no boilerplate, just a function call and the expected output in the docstring. Here’s a quick example of a Fibonacci function.

def fib(n):
        '''Return the nth fibonacci number.
        >>> fib(0)
        0
        >>> fib(1)
        1
        >>> fib(2)
        1
        >>> fib(3)
        2
        >>> fib(4)
        3
        '''
        if n == 0:
                return 0
        elif n == 1:
                return 1
        else:
                return fib(n - 1) + fib(n - 2)

If you want to run your doctests, just add the following three lines to the bottom of your module.

if __name__ == '__main__':
        import doctest
        doctest.testmod()

Now you can run the doctests simply by running your module, like python fib.py. Note that doctest prints nothing when every test passes; run python fib.py -v if you want a verbose report.

So how well does this fit in with the TDD philosophy? Here are the basic TDD practices:

  1. Think about what you want to test
  2. Write a small test
  3. Write just enough code to fail the test
  4. Run the test and watch it fail
  5. Write just enough code to pass the test
  6. Run the test and watch it pass (if it fails, go back to step 4)
  7. Go back to step 1 and repeat until done

And now a step-by-step breakdown of how to do this with doctests, in excruciating detail.

1. Define a new empty function

def fib(n):
        '''Return the nth fibonacci number.'''
        pass

if __name__ == '__main__':
        import doctest
        doctest.testmod()

2. Write a doctest

def fib(n):
        '''Return the nth fibonacci number.
        >>> fib(0)
        0
        '''
        pass

3. Run the module and watch the doctest fail

python fib.py
**********************************************************************
File "fib1.py", line 3, in __main__.fib
Failed example:
    fib(0)
Expected:
    0
Got nothing
**********************************************************************
1 items had failures:
   1 of   1 in __main__.fib
***Test Failed*** 1 failures.

4. Write just enough code to pass the failing doctest

def fib(n):
        '''Return the nth fibonacci number.
        >>> fib(0)
        0
        '''
        return 0

5. Run the module and watch the doctest pass

python fib.py

6. Go back to step 2 and repeat

Now you can start filling in the rest of the function, one test at a time. In practice, you may not write code exactly like this, but the point is that doctests provide a really easy way to test your code as you write it.
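
For example, the next cycle might add a doctest for fib(1) and just enough code to keep both tests passing. This intermediate version is only a sketch; it gets replaced as more tests are added:

def fib(n):
        '''Return the nth fibonacci number.
        >>> fib(0)
        0
        >>> fib(1)
        1
        '''
        if n == 0:
                return 0
        else:
                return 1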

Unit Tests

Ok, so doctests are great for simple tests. But what if your tests need to be a bit more complex? Maybe you need some external data, or mock objects. In that case, you’ll be better off with more traditional unit tests. But first, take a little time to see if you can decompose your code into a set of smaller functions that can be tested individually. I find that code that is easier to test is also easier to understand.
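
For comparison, here’s a minimal sketch of the same checks written as a traditional unit test with the standard unittest module. The module name fib and the expected values are taken from the example above; where the file lives in your tests/ directory is up to you.

import unittest

from fib import fib

class FibTest(unittest.TestCase):
    def setUp(self):
        # per-test setup goes here; this is where external data or
        # mock objects would be prepared
        self.known_values = [(0, 0), (1, 1), (2, 1), (3, 2), (4, 3)]

    def test_known_values(self):
        # check each input against its expected fibonacci number
        for n, expected in self.known_values:
            self.assertEqual(fib(n), expected)

if __name__ == '__main__':
    unittest.main()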

Running Tests

For running my tests, I use nose. I have a tests/ directory with a simple configuration file, nose.cfg

[nosetests]
verbosity=3
with-doctest=1

Then in my Makefile, I add a test target so I can run make test.

test:
        @nosetests --config=tests/nose.cfg tests PACKAGE1 PACKAGE2

PACKAGE1 and PACKAGE2 are optional paths to your code. They could point to unit test packages and/or production code containing doctests.

And finally, if you’re looking for a continuous integration server, try Buildbot.


Static Analysis of Erlang Code with Dialyzer

January 15, 2009 at 7:21 pm (erlang)

Dialyzer is a tool that does static analysis of your Erlang code. It’s great for identifying type errors and unreachable code. Here’s how to use it from the command line.

dialyzer -r PATH/TO/APP -I PATH/TO/INCLUDE

Pretty simple! PATH/TO/APP should be an Erlang application directory containing your ebin/ and/or src/ directories. PATH/TO/INCLUDE should be a path to a directory that contains any .hrl files that need to be included. The -I is optional if you have no include files. You can have as many -r and -I options as you need. If you add -q, then dialyzer runs more quietly, succeeding silently or reporting any errors found.

If you have a test/ directory with Common Test suites, then you’ll want to add “-I /usr/lib/erlang/lib/test_server*/include/” and “-I /usr/lib/erlang/lib/common_test*/include/”. I’ve actually set this up in my Makefile to run as make check. It’s been great for catching bad return types, misspellings, and wrong function parameters.
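
A make check target along those lines might look like the following sketch. The lib/app1 application path and the lib/app1/include directory are placeholders for your own layout:

check:
        @dialyzer -q -r lib/app1 -I lib/app1/include \
                -I /usr/lib/erlang/lib/test_server*/include/ \
                -I /usr/lib/erlang/lib/common_test*/include/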


Unit Testing with Erlang’s Common Test Framework

November 26, 2008 at 10:57 am (erlang)

One of the first things people look for when getting started with Erlang is a unit testing framework, and EUnit tends to be the framework of choice. But I always had trouble getting EUnit to play nice with my code, since it does parse transforms, which screws up the handling of include files and record definitions. And because Erlang has pattern matching, there’s really no reason for assert macros. So I looked around for alternatives and found that a testing framework called common_test has been included since Erlang/OTP-R12B. common_test (and test_server) are much more heavy duty than EUnit, but don’t let that scare you away. Once you’ve set everything up, writing and running unit tests is quite painless.

Directory Setup

I’m going to assume an OTP compliant directory setup, specifically:

  1. a top level directory we’ll call project/
  2. a lib/ directory containing your applications at project/lib/
  3. application directories inside lib/, such as project/lib/app1/
  4. code files are in app1/src/ and beam files are in app1/ebin/

So we end up with a directory structure like this:

project/
    lib/
        app1/
            src/
            ebin/

Test Suites

Inside the app1/ directory, create a directory called test/. This is where your test suites will go. Generally, you’ll have one test suite per code module, so if you have app1/src/module1.erl, then you’ll create app1/test/module1_SUITE.erl for all your module1 unit tests. Each test suite should look something like this:

-module(module1_SUITE).

% easier than exporting by name
-compile(export_all).

% required for common_test to work
-include("ct.hrl").

%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% common test callbacks %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Specify a list of all unit test functions
all() -> [test1, test2].

% required, but can just return Config. this is a suite level setup function.
init_per_suite(Config) ->
    % do custom per suite setup here
    Config.

% required, but can just return Config. this is a suite level tear down function.
end_per_suite(Config) ->
    % do custom per suite cleanup here
    Config.

% optional, can do function level setup for all functions,
% or for individual functions by matching on TestCase.
init_per_testcase(_TestCase, Config) ->
    % do custom test case setup here
    Config.

% optional, can do function level tear down for all functions,
% or for individual functions by matching on TestCase.
end_per_testcase(_TestCase, Config) ->
    % do custom test case cleanup here
    Config.

%%%%%%%%%%%%%%%%
%% test cases %%
%%%%%%%%%%%%%%%%

test1(_Config) ->
    % write standard erlang code to test whatever you want
    % use pattern matching to specify expected return values
    ok.

test2(_Config) -> ok.
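
To make the pattern matching point from the introduction concrete, a test case body can use a plain match as its assertion: if the value doesn’t match the expected pattern, a badmatch error is raised and the test case fails. Here’s a standalone illustration (not part of the suite above), using lists:reverse/1 as a stand-in for your own code:

test_reverse(_Config) ->
    % if lists:reverse/1 returned anything else, this match would raise
    % a badmatch error and common_test would report the test case as failed
    [3, 2, 1] = lists:reverse([1, 2, 3]),
    ok.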

Test Specification

Now that we have a test suite at project/lib/app1/test/module1_SUITE.erl, we can make a test specification so common_test knows where to find the test suites, and which suites to run. Something I found out the hard way is that common_test requires absolute paths in its test specifications. So instead of creating a file called test.spec directly, we’ll create a file called test.spec.in, and use make to generate the test.spec file with absolute paths.

test.spec.in

{logdir, "@PATH@/log"}.
{alias, app1, "@PATH@/lib/app1"}.
{suites, app1, [module1_SUITE]}.

Makefile

src:
    erl -pa lib/*/ebin -make

test.spec: test.spec.in
    cat test.spec.in | sed -e "s,@PATH@,$(PWD)," > $(PWD)/test.spec

test: test.spec src
    run_test -pa $(PWD)/lib/*/ebin -spec test.spec

Running the Tests

As you can see above, I also use the Makefile for running the tests with the command make test. For this command to work, run_test must be installed in your PATH. To do so, you need to run /usr/lib/erlang/lib/common_test-VERSION/install.sh (where VERSION is whatever version number you currently have). See the common_test installation instructions for more information. I’m also assuming you have an Emakefile for compiling the code in lib/app1/src/ with the make src command.
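
If you don’t already have an Emakefile, a minimal one for the layout above might look like the following sketch (the debug_info option is just a sensible default, not a requirement):

{'lib/app1/src/*', [debug_info, {outdir, "lib/app1/ebin"}]}.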

Final Thoughts

So there you have it, an example test suite, a test specification, and a Makefile for running the tests. The final file and directory structure should look something like this, including the log/ directory that the logdir setting in test.spec points at:

project/
    Emakefile
    Makefile
    test.spec.in
    lib/
        app1/
            src/
                module1.erl
            ebin/
            test/
                module1_SUITE.erl
    log/

Now all you need to do is write your unit tests in the form of test suites and add those suites to test.spec.in. There’s a lot more you can get out of common_test, such as code coverage analysis, HTML logging, and large scale testing. I’ll be covering some of those topics in the future, but for now I’ll end with some parting thoughts from the Common Test User’s Guide:

It’s not possible to prove that a program is correct by testing. On the contrary, it has been formally proven that it is impossible to prove programs in general by testing.

There are many kinds of test suites. Some concentrate on calling every function or command… Some other do the same, but uses all kinds of illegal parameters.

Aim for finding bugs. Write whatever test that has the highest probability of finding a bug, now or in the future. Concentrate more on the critical parts. Bugs in critical subsystems are a lot more expensive than others.

Aim for functionality testing rather than implementation details. Implementation details change quite often, and the test suites should be long lived.

Aim for testing everything once, no less, no more
